Science.gov

Sample records for high-performance microdialysis-based system

  1. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document is a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  2. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  3. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate, and commercialize technology for the storage systems that will be the future repositories for our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  4. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  5. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  6. LANL High-Performance Data System (HPDS)

    NASA Technical Reports Server (NTRS)

    Collins, M. William; Cook, Danny; Jones, Lynn; Kluegel, Lynn; Ramsey, Cheryl

    1993-01-01

    The Los Alamos High-Performance Data System (HPDS) is being developed to meet the very large data storage and data handling requirements of a high-performance computing environment. The HPDS will consist of fast, large-capacity storage devices that are directly connected to a high-speed network and managed by software distributed in workstations. The HPDS model, the HPDS implementation approach, and experiences with a prototype disk array storage system are presented.

  7. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    The convergence of computer systems and communication technologies is moving toward switched high-performance modular system architectures built on high-speed switched interconnections. Multi-core processors are becoming the more promising route to high-performance systems, and traditional parallel-bus system architectures (VME/VXI, cPCI/PXI) are moving to new, higher-speed serial switched interconnections. Fundamentals in system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and of new international standards for developing high-performance embedded and compact modular systems for real-time applications is presented.

  8. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. Monitoring such systems makes it possible to foresee possible misfortunes or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
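The approach described in this abstract, replacing a round-robin database with a script-driven SQL store so that raw samples are never aged out, can be sketched as follows. The schema, table, and metric names here are hypothetical, and Python's built-in sqlite3 module stands in for MySQL so the example is self-contained; it illustrates the idea, not the paper's actual scripts.

```python
import sqlite3

# Sketch: store monitor samples (e.g. from Ganglia) in a relational table
# instead of a round-robin database. Unlike an RRD, no samples are
# consolidated or aged out, so every raw value stays queryable.
# Schema and names are hypothetical; sqlite3 stands in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metrics (
    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record(host, metric, value, ts):
    """Insert one collected sample into the metrics table."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, ts))

record("node01", "load_one", 0.42, 1000)
record("node01", "load_one", 0.57, 1060)

# Every raw sample remains available at full resolution.
rows = conn.execute(
    "SELECT value FROM metrics WHERE host = 'node01' ORDER BY ts").fetchall()
```

A periodic script polling the monitor and calling `record` for each sample would reproduce the paper's script-driven design at this level of abstraction.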

  9. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  10. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero-energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy-efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems so as to improve the energy performance of commercial fenestration systems, and in turn reduce the energy consumption of commercial buildings and achieve zero-energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  11. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  12. A programmable MTD system with high performance

    NASA Astrophysics Data System (ADS)

    Peng, Ying-Ning; Ma, Zang-E.; Ding, Xiu-Dong; Wang, Xiu-Tan; Fu, Jeng-Yun

    A digital programmable MTD system has been developed recently. In this system, slow and fast moving targets are detected by a 64-order complex FIR filter and a 64-point FFT equivalent filter bank, respectively. A method that obtains a land-clutter CFAR threshold for every Doppler channel with very good performance is proposed. When the power spectral density of the land clutter has a certain cubic shape, an average signal-to-clutter-ratio improvement factor of about 48 dB can be realized in this system.
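The FFT filter bank mentioned in this abstract can be sketched as follows: the DFT of N pulse returns from one range cell acts as a bank of N Doppler filters, and a moving target appears as a peak in the channel matching its radial velocity. The sketch below uses N = 8 for brevity (the system described uses 64) and illustrates only the principle, not the published design.

```python
import cmath

# Sketch of an N-point DFT used as a Doppler filter bank, as in an MTD
# processor: N pulse returns from one range cell are transformed, and a
# moving target concentrates in the Doppler bin matching its velocity.
# Illustrative only; N = 8 here, versus N = 64 in the system described.
def doppler_filter_bank(pulses):
    """DFT of one range cell's pulse train -> per-Doppler-channel outputs."""
    N = len(pulses)
    return [sum(pulses[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

N = 8
target_bin = 3  # simulated target whose Doppler falls in channel 3
pulses = [cmath.exp(2j * cmath.pi * target_bin * n / N) for n in range(N)]
channels = doppler_filter_bank(pulses)
peak = max(range(N), key=lambda k: abs(channels[k]))
```

Because each Doppler channel sees a different clutter level, a per-channel CFAR threshold (as proposed in the paper) would then be computed on each `channels[k]` output separately.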

  13. High-Performance Energy Applications and Systems

    SciTech Connect

    Miller, Barton

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  14. Building Synergy: The Power of High Performance Work Systems.

    ERIC Educational Resources Information Center

    Gephart, Martha A.; Van Buren, Mark E.

    1996-01-01

    Suggests that high-performance work systems create the synergy that lets companies gain and keep a competitive advantage. Identifies the components of high-performance work systems and critical action steps for implementation. Describes the results companies such as Xerox, Lever Brothers, and Corning Incorporated have achieved by using them. (JOW)

  15. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was applied to: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  16. Toward a new metric for ranking high performance computing systems.

    SciTech Connect

    Heroux, Michael Allen; Dongarra, Jack.

    2013-06-01

    The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance, and expect it to drive computer system design and implementation in directions that will better improve application performance.
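The conjugate gradient iteration at the heart of HPCG can be sketched in a few lines. This is an illustration of the kind of kernel the benchmark exercises (matrix-vector products and vector updates whose data access patterns resemble real applications), not the official benchmark code; the small dense test matrix is chosen only so the example is self-contained.

```python
# Minimal conjugate gradient solver for a symmetric positive-definite
# system A x = b -- the iteration HPCG is built around, in sketch form.
def cg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b by conjugate gradients; A is a list-of-lists matrix."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]                      # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# 1-D Laplacian-style SPD test system; exact solution is x = [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = cg(A, b)
```

In the actual benchmark the matrix is a large sparse discretized Laplacian, so the matrix-vector product is memory-bandwidth bound, which is precisely the behavior HPL fails to capture.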

  17. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  18. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  19. Materials integration issues for high performance fusion power systems.

    SciTech Connect

    Smith, D. L.

    1998-01-14

    One of the primary requirements for the development of fusion as an energy source is the qualification of materials for the first wall/blanket system that will provide high performance and exhibit favorable safety and environmental features. Both the economic competitiveness and the environmental attractiveness of fusion will be strongly influenced by the materials constraints. A key aspect is the development of a compatible combination of materials for the various functions of structure, tritium breeding, coolant, neutron multiplication, and other special requirements for a specific system. This paper presents an overview of key materials integration issues for high performance fusion power systems. Issues such as chemical compatibility of structure and coolant, hydrogen/tritium interactions with the plasma-facing/structure/breeder materials, thermomechanical constraints associated with coolant/structure, thermal-hydraulic requirements, and safety/environmental considerations are presented from a systems viewpoint. The major materials interactions for leading blanket concepts are discussed.

  20. Los Alamos National Laboratory's high-performance data system

    SciTech Connect

    Mercier, C.; Chorn, G.; Christman, R.; Collins, B.

    1991-01-01

    Los Alamos National Laboratory is designing a High-Performance Data System (HPDS) that will provide storage for supercomputers requiring large files and fast transfer speeds. The HPDS will meet the performance requirements by managing data transfers from high-speed storage systems connected directly to a high-speed network. File and storage management software will be distributed in workstations. Network protocols will ensure reliable, wide-area network data delivery to support long-distance distributed processing. 3 refs., 2 figs.

  1. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  2. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PC) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PC's are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capability to ingest high-speed real-time data; perform signal or image processing; and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive

  3. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems of very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  4. Alternative High Performance Polymers for Ablative Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Boghozian, Tane; Stackpoole, Mairead; Gonzales, Greg

    2015-01-01

    Ablative thermal protection systems are commonly used as protection from the intense heat during re-entry of a space vehicle and have been used successfully on many missions, including Stardust and the Mars Science Laboratory, both of which used PICA, a phenolic-based ablator. Historically, phenolic resin has served as the ablative polymer for many TPS systems. However, it has limitations in both processing and properties, such as char yield, glass transition temperature, and char stability. Therefore, alternative high performance polymers are being considered, including cyanate ester resin, polyimide, and polybenzoxazine. The thermal and mechanical properties of these resin systems were characterized and compared with phenolic resin.

  5. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC "middleware" software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  6. The architecture of the High Performance Storage System (HPSS)

    SciTech Connect

    Teaff, D.; Coyne, B.; Watson, D.

    1995-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage systems by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  7. CVC silicon carbide high-performance optical systems

    NASA Astrophysics Data System (ADS)

    Fischer, William F., III; Foss, Colby A., Jr.

    2004-10-01

    The demand for high performance lightweight mirrors has never been greater. The coming years will require lighter, higher performance mirrors in greater numbers than are currently available. Applications include both ground- and space-based telescopes, surveillance, navigation, guidance, and tracking and control systems. For instance, the total requirement for US government sponsored systems alone is projected to be greater than 200 m2/year [1]. Given that the total current global production capacity is on the order of 50 m2/year [1], the need and opportunity to rapidly produce high quality optics is readily apparent. Key areas of concern for all these programs are not only the mission-critical optical performance metrics, but also the ability to meet the timeline for deployment. As such, any potential reduction in the long lead times for manufactured optical systems and components is critical. The associated improvements would lead to reductions in schedule and acquisition cost, as well as increased performance. Trex's patented CVC SiC process is capable of rapidly producing high performance SiC optics for any optical system. This paper summarizes the CVC SiC production process and the current optical performance levels, as well as future areas of work.

  8. Ultra High Performance, Highly Reliable, Numeric Intensive Processors and Systems

    DTIC Science & Technology

    1989-10-01

    ...to design high-performance DSP/IP systems using either off-the-shelf components or application-specific integrated circuitry (ASIC). ...are the chirp-z transform (CZT) [13] and (Rader's) Prime Factor Transform (PFT) [11]. The RNS/CZT is being studied by a group at MITRE [14] and is given... The PFT RNS/CRNS/QRNS implementation has dynamic-range requirements on the order of NQ^2 (vs. NQ^4 for the CZT, and much higher for the FFT). Therefore, the

  9. High performance distributed feedback fiber laser sensor array system

    NASA Astrophysics Data System (ADS)

    He, Jun; Li, Fang; Xu, Tuanwei; Wang, Yan; Liu, Yuliang

    2009-11-01

    Distributed feedback (DFB) fiber lasers have unique properties useful for sensing applications. This paper presents a high performance distributed feedback (DFB) fiber laser sensor array system. Four key techniques were adopted to build the system: DFB fiber laser design and fabrication, interferometric wavelength-shift demodulation, the digital phase generated carrier (PGC) technique, and dense wavelength division multiplexing (DWDM). Experimental results confirm that a high dynamic strain resolution of 305 fɛ/√Hz (at 1 kHz) has been achieved by the proposed sensor array system, and multiplexing of an eight-channel DFB fiber laser sensor array has been demonstrated. The proposed DFB fiber laser sensor array system is suitable for ultra-weak signal detection and has potential applications in petroleum seismic exploration, earthquake prediction, and security.

  10. High performance/low cost accelerator control system

    NASA Astrophysics Data System (ADS)

    Magyary, S.; Glatz, J.; Lancaster, H.; Selph, F.; Fahmie, M.; Ritchie, A.; Timossi, C.; Hinkson, C.; Benjegerdes, R.

    1980-10-01

    Implementation of a high performance computer control system tailored to the requirements of the Super HILAC accelerator is described. This system uses a distributed structure with fiber optic data links; multiple CPUs operate in parallel at each node. A large number of the latest 16 bit microcomputer boards are used to get a significant processor bandwidth. Dynamically assigned and labeled knobs together with touch screens allow a flexible and efficient operator interface. An X-Y vector graphics system allows display and labeling of real time signals as well as general plotting functions. Both the accelerator parameters and the graphics system can be driven from BASIC interactive programs in addition to the precanned user routines.

  11. High performance electrospinning system for fabricating highly uniform polymer nanofibers

    NASA Astrophysics Data System (ADS)

    Munir, Muhammad Miftahul; Iskandar, Ferry; Khairurrijal; Okuyama, Kikuo

    2009-02-01

    A high performance electrospinning system has been successfully developed for the production of highly uniform polymer nanofibers. The electrospinning system employed a proportional-integral-derivative control action to maintain a constant current during the production of polyvinyl acetate (PVAc) nanofibers from a precursor solution, prepared by dissolving PVAc powder in dimethyl formamide, so that high uniformity of the nanofibers was achieved. It was found that the cone-jet length observed at the end of the needle during injection of the precursor solution, as well as the average diameter of the nanofibers, decreased with decreasing Q/I, where Q is the flow rate of the precursor solution and I is the current flowing through the electrospinning system. A power law obtained from the relation between the average diameter and Q/I is in accordance with the theoretical model.
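A power law between average fiber diameter and Q/I, as reported in this abstract, can be recovered from measurements with a simple log-log least-squares fit. The sketch below uses synthetic data with an assumed exponent of 2/3 purely for illustration; neither the exponent nor the prefactor is taken from the paper.

```python
import math

# Illustrative fit of a power law d = c * (Q/I)**a via a least-squares
# line in log-log coordinates. The exponent 2/3 used to generate the
# synthetic data is an assumption for this example only.
def fit_power_law(x, y):
    """Return (c, a) minimizing least squares of log y = log c + a log x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx = sum(lx) / n
    my = sum(ly) / n
    a = (sum((lx[i] - mx) * (ly[i] - my) for i in range(n))
         / sum((lx[i] - mx) ** 2 for i in range(n)))
    c = math.exp(my - a * mx)
    return c, a

q_over_i = [0.5, 1.0, 2.0, 4.0, 8.0]                     # Q/I, arbitrary units
diameter = [100.0 * v ** (2.0 / 3.0) for v in q_over_i]  # synthetic diameters
c, a = fit_power_law(q_over_i, diameter)
```

Applied to real measured diameters versus Q/I, the fitted exponent `a` is what would be compared against the theoretical model the abstract mentions.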

  12. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  13. Development of a High Performance Acousto-ultrasonic Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  14. Development of a High Performance Acousto-Ultrasonic Scan System

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2003-03-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  15. Development of a High Performance Acousto-Ultrasonic Scan System

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-10-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  16. Sustaining high performance: dynamic balancing in an otherwise unbalanced system.

    PubMed

    Wolf, Jason A

    2011-01-01

    As Ovid said, "There is nothing in the whole world which is permanent." It is this very premise that frames the discoveries in this chapter and the compelling paradox it has raised. What began as a question of how performance is sustained unveiled a collection of core organizational paradoxes. The findings ultimately suggest that sustained high performance is not a permanent state an organization achieves; rather, it is through perpetual movement and dynamic balance that sustainability occurs. The idea of sustainability as movement is predicated on the ability of organizational members to move beyond the experience of paradox as an impediment to progress. By holding three critical "movements"--agile/consistency, collective/individualism, and informative/inquiry--not as paradoxical, but as active polarities, the organizations in the study were able to transcend paradox and take active steps toward continuous achievement in outperforming their peers. The study, focused on a collection of hospitals across the United States, reveals powerful stories of care and service, of the profound grace of human capacity, and of clear actions taken to create significant results. All of this was achieved in an environment of great volatility, in essence an unbalanced system. It was the discovery of movement, and ultimately of dynamic balancing, that allowed the organizations in this study to move beyond stasis to the continuous "state" of sustaining high performance.

  17. Coal-fired high performance power generating system. Final report

    SciTech Connect

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design, and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV) compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative type gas turbine, efficiencies of over 49% could be realized in advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.

  18. High performance cluster system design for remote sensing data processing

    NASA Astrophysics Data System (ADS)

    Shi, Yuanli; Shen, Wenming; Xiong, Wencheng; Fu, Zhuo; Xiao, Rulin

    2012-10-01

    During recent years, cluster systems have played an increasingly important role in high-performance computing architecture design, as cost-effective and efficient parallel computing systems able to satisfy specific computational requirements of the earth and space sciences communities. This paper presents a powerful cluster system built by the Satellite Environment Center, Ministry of Environmental Protection of China, designed to process massive remote sensing data from the HJ-1 satellites automatically every day. The architecture of this cluster system, including the hardware device layer, network layer, OS/FS layer, middleware layer, and application layer, is given. To verify the performance of the cluster system, image registration was chosen as an experiment on one scene from the HJ-1 CCD sensor. The image registration experiments show that it is an effective system for improving the efficiency of data processing, and could provide rapid response in applications that demand it, such as wildland fire monitoring and tracking, oil spill monitoring, military target detection, etc. Further work will focus on comprehensive parallel design and implementations of remote sensing data processing.

  19. A high-performance digital system for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Cui, Ziqiang; Wang, Huaxiang; Chen, Zengqiang; Xu, Yanbin; Yang, Wuqiang

    2011-05-01

    This paper describes a recently developed digital-based data acquisition system for electrical capacitance tomography (ECT). The system consists of high-capacity field-programmable gate arrays (FPGA) and fast data conversion circuits together with a specific signal processing method. In this system, digital phase-sensitive demodulation is implemented. A specific data acquisition scheme is employed to deal with residual charges in each measurement, resulting in a high signal-to-noise ratio (SNR) at high excitation frequency. A high-speed USB interface is employed between the FPGA and a host PC. Software in Visual C++ has been developed to accomplish operational functions. Various tests were performed to evaluate the system, e.g. frame rate, SNR, noise level, linearity, and static and dynamic imaging. The SNR is 60.3 dB at 1542 frames s-1 for a 12-electrode sensor. The mean absolute error between the measured capacitance and the linear fit value is 1.6 fF. The standard deviation of the measurements is in the order of 0.1 fF. The dynamic imaging test demonstrates the advantages of high temporal resolution of the system. The experimental results indicate that the digital signal processing devices can be used to construct a high-performance ECT system.
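    The digital phase-sensitive demodulation the ECT system implements can be illustrated in software: multiply the sampled electrode signal by quadrature references at the excitation frequency and average over an integer number of periods. The excitation frequency, sample rate, and amplitude below are illustrative, not the paper's actual parameters.

    ```python
    # Minimal sketch of digital phase-sensitive demodulation: project the
    # sampled signal onto cos/sin references and average over whole periods.
    import math

    def demodulate(samples, f_exc, f_sample):
        """Return (amplitude, phase) of the f_exc component of `samples`."""
        n = len(samples)
        i_sum = q_sum = 0.0
        for k, v in enumerate(samples):
            arg = 2 * math.pi * f_exc * k / f_sample
            i_sum += v * math.cos(arg)
            q_sum += v * math.sin(arg)
        i, q = 2 * i_sum / n, 2 * q_sum / n
        return math.hypot(i, q), math.atan2(q, i)

    # Synthesize 10 full periods of a 500 kHz excitation sampled at 10 MS/s.
    f_exc, f_s, n = 500e3, 10e6, 200
    sig = [1.3 * math.sin(2 * math.pi * f_exc * k / f_s + 0.4) for k in range(n)]
    amp, _ = demodulate(sig, f_exc, f_s)
    print(round(amp, 3))   # → 1.3, the amplitude is recovered exactly
    ```

    Averaging over an integer number of excitation periods makes the cross terms cancel, which is why this scheme rejects noise at other frequencies and supports the high SNR figures quoted in the abstract.
    
    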

  20. Three-Dimensional Electrodes for High-Performance Bioelectrochemical Systems

    PubMed Central

    Yu, Yang-Yang; Zhai, Dan-Dan; Si, Rong-Wei; Sun, Jian-Zhong; Liu, Xiang; Yong, Yang-Chun

    2017-01-01

    Bioelectrochemical systems (BES) are groups of bioelectrochemical technologies and platforms that could facilitate versatile environmental and biological applications. The performance of BES is mainly determined by the key process of electron transfer at the bacteria and electrode interface, which is known as extracellular electron transfer (EET). Thus, developing novel electrodes to encourage bacteria attachment and enhance EET efficiency is of great significance. Recently, three-dimensional (3D) electrodes, which provide large specific area for bacteria attachment and macroporous structures for substrate diffusion, have emerged as a promising electrode for high-performance BES. Herein, a comprehensive review of versatile methodology developed for 3D electrode fabrication is presented. This review article is organized based on the categorization of 3D electrode fabrication strategy and BES performance comparison. In particular, the advantages and shortcomings of these 3D electrodes are presented and their future development is discussed. PMID:28054970

  1. High-performance work systems and occupational safety.

    PubMed

    Zacharatos, Anthea; Barling, Julian; Iverson, Roderick D

    2005-01-01

    Two studies were conducted investigating the relationship between high-performance work systems (HPWS) and occupational safety. In Study 1, data were obtained from company human resource and safety directors across 138 organizations. LISREL VIII results showed that an HPWS was positively related to occupational safety at the organizational level. Study 2 used data from 189 front-line employees in 2 organizations. Trust in management and perceived safety climate were found to mediate the relationship between an HPWS and safety performance measured in terms of personal-safety orientation (i.e., safety knowledge, safety motivation, safety compliance, and safety initiative) and safety incidents (i.e., injuries requiring first aid and near misses). These 2 studies provide confirmation of the important role organizational factors play in ensuring worker safety.

  2. Integrated microfluidic systems for high-performance genetic analysis.

    PubMed

    Liu, Peng; Mathies, Richard A

    2009-10-01

    Driven by the ambitious goals of genome-related research, fully integrated microfluidic systems have developed rapidly to advance biomolecular and, in particular, genetic analysis. To produce a microsystem with high performance, several key elements must be strategically chosen, including device materials, temperature control, microfluidic control, and sample/product transport integration. We review several significant examples of microfluidic integration in DNA sequencing, gene expression analysis, pathogen detection, and forensic short tandem repeat typing. The advantages of high speed, increased sensitivity, and enhanced reliability enable these integrated microsystems to address bioanalytical challenges such as single-copy DNA sequencing, single-cell gene expression analysis, pathogen detection, and forensic identification of humans in formats that enable large-scale and point-of-analysis applications.

  3. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative, high-performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed in the field of High Energy Physics, specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, meaning that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom-designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed in a 2D or 3D space, on black-and-white or grayscale images, depending on the application, which exponentially increases the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithms, together with performance results on a latest-generation Xilinx Kintex UltraScale FPGA device.
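    A software caricature of the Associative Memory matching step may help: each stored pattern is a tuple of coarse bin IDs (one per detector layer or image band), and a pattern "fires" when enough of its bins coincide with the incoming hits. The bank contents and threshold below are invented for illustration; the real chip evaluates all stored patterns in parallel rather than in a loop.

    ```python
    # Toy model of associative-memory pattern matching: a pattern fires when
    # at least `min_layers` of its per-layer bins appear among the hits.

    def fired_patterns(bank, hits_per_layer, min_layers):
        """Return indices of stored patterns matched by the incoming hits."""
        fired = []
        for idx, pattern in enumerate(bank):
            matched = sum(1 for layer, bin_id in enumerate(pattern)
                          if bin_id in hits_per_layer[layer])
            if matched >= min_layers:
                fired.append(idx)
        return fired

    bank = [(1, 4, 2), (0, 3, 5)]          # two stored 3-layer patterns
    hits = [{1, 7}, {4}, {2, 9}]           # observed bins per layer
    print(fired_patterns(bank, hits, min_layers=3))   # → [0]
    ```

    The data reduction comes from the coarse binning: only patterns that fire are passed to the FPGA post-processing stage.
    
    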

  4. Study of High-Performance Satellite Bus System

    NASA Astrophysics Data System (ADS)

    Shirai, Tatsuya; Noda, Atsushi; Tsuiki, Atsuo

    2002-01-01

    Speaking of Low Earth Orbit (LEO) satellites such as earth observation satellites, a lightweight, high performance bus system will make a great contribution to mission component development. Also, raising the ratio of payload to total mass will reduce the launch cost. The Office of Research and Development in the National Space Development Agency of Japan (NASDA) is studying such a sophisticated satellite bus system. The system is expected to consist of the following advanced components and subsystems, which have been developed in parallel from the element level by the Office. (a) Attitude control system (ACS): this subsystem will provide the function to determine and control the satellite attitude very accurately, with a next-generation star tracker, a GPS receiver, and the onboard software to achieve this function. (b) Electric power system (EPS): this subsystem will become much lighter and more powerful by utilizing a more efficient solar battery cell, power MOSFET, and DC/DC converter. Besides, to store and supply the power, the Office will also study a lithium battery for space that is light and small enough to contribute to reducing the size and weight of the EPS. (c) Onboard computing system (OCS): this computing system will provide high-speed processing. The MPU (Multi Processing Unit) cell in the OCS is capable of executing approximately 200 MIPS (Mega Instructions Per Second). The OCS will play an important role not only in enabling the ACS to function well but also in handling the image processing data. (d) Thermal control system (TCS): as a thermal control system, a mission-friendly system is under study. A small hybrid fluid thermal control system that the Office is studying, combining a mechanical pump loop and a capillary pump loop, will be robust to changes in thermal loads and facilitate temperature control. (e) Communications system (CS): in order to transmit high-rate data, the Office is studying an optical link system

  5. Using distributed OLTP technology in a high performance storage system

    SciTech Connect

    Tyler, T.W.; Fisher, D.S.

    1995-03-01

    The design of scalable mass storage systems requires various system components to be distributed across multiple processors. Most of these processes maintain persistent database-type information (i.e., metadata) on the resources they are responsible for managing (e.g., bitfiles, bitfile segments, physical volumes, virtual volumes, cartridges, etc.). These processes all participate in fulfilling end-user requests and updating metadata information. A number of challenges arise when distributed processes attempt to maintain separate metadata resources with production-level integrity and consistency. For example, when requests fail, metadata changes made by the various processes must be aborted or rolled back. When requests are successful, all metadata changes must be committed together. If all metadata changes cannot be committed together for some reason, then all metadata changes must be rolled back to the previous consistent state. Lack of metadata consistency jeopardizes storage system integrity. Distributed on-line transaction processing (OLTP) technology can be applied to distributed mass storage systems as the mechanism for managing the consistency of distributed metadata. OLTP concepts are familiar to many industries, such as banking and financial services, but are less well known and understood in scientific and technical computing. As mass storage systems and other products are designed using distributed processing and data-management strategies for performance, scalability, and/or availability reasons, distributed OLTP technology can be applied to solve the inherent challenges raised by such environments. This paper discusses the benefits of using distributed transaction processing products. Design and implementation experiences using the Encina OLTP product from Transarc in the High Performance Storage System are presented in more detail as a case study of how this technology can be applied to mass storage systems designed for distributed environments.
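    The all-or-nothing metadata update the abstract describes is classically achieved with two-phase commit. A toy coordinator might look like the sketch below; the participant names are hypothetical, and this bears no relation to the actual Encina API.

    ```python
    # Toy two-phase commit: all participants commit together, or all roll back.

    class Participant:
        """Holds one metadata server's state; `can_commit` models its vote."""
        def __init__(self, name, can_commit=True):
            self.name, self.can_commit = name, can_commit
            self.state = "pending"
        def prepare(self):
            return self.can_commit
        def commit(self):
            self.state = "committed"
        def rollback(self):
            self.state = "rolled_back"

    def two_phase_commit(participants):
        # Phase 1 (voting): every participant must vote yes, or nobody commits.
        if all(p.prepare() for p in participants):
            for p in participants:          # Phase 2: commit everywhere
                p.commit()
            return True
        for p in participants:              # Any "no" vote rolls everyone back
            p.rollback()
        return False

    # Hypothetical metadata servers; the volume server refuses to commit.
    servers = [Participant("bitfile"), Participant("volume", can_commit=False)]
    result = two_phase_commit(servers)
    print(result, [p.state for p in servers])   # False ['rolled_back', 'rolled_back']
    ```

    The point of the sketch is the invariant the paper emphasizes: no partial commits, so distributed metadata never ends up in an inconsistent state.
    
    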

  6. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation, and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few tens of gigaops, data archived in HSMs in a few tens of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and the IBM HPSS design team recognized that we were headed for a data storage explosion, driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to tens of terabytes/day. This paper discusses HPSS architectural, implementation, and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  7. Coal-fired high performance power generating system

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of >47% thermal efficiency; NOx, SOx, and particulate emissions <25% of NSPS; a cost of electricity 10% lower; coal supplying >65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long-flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals, a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  8. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH PERFORMANCE POWER SYSTEMS

    SciTech Connect

    1998-10-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, the University of Tennessee Space Institute, and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolyzation process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. Preliminary process design was started with respect to the integrated test program at the PSDF. All of the construction tasks at Foster Wheeler's Combustion and Environmental Test

  9. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  10. High-Performance Scanning Acousto-Ultrasonic System

    NASA Technical Reports Server (NTRS)

    Roth, Don; Martin, Richard; Kautz, Harold; Cosgriff, Laura; Gyekenyesi, Andrew

    2006-01-01

    A high-performance scanning acousto-ultrasonic system, now undergoing development, is designed to afford enhanced capabilities for imaging microstructural features, including flaws, inside plate specimens of materials. The system is expected to be especially helpful in analyzing defects that contribute to failures in polymer- and ceramic-matrix composite materials, which are difficult to characterize by conventional scanning ultrasonic techniques and other conventional nondestructive testing techniques. Selected aspects of the acousto-ultrasonic method have been described in several NASA Tech Briefs articles in recent years. Summarizing briefly: The acousto-ultrasonic method involves the use of an apparatus like the one depicted in the figure (or an apparatus of similar functionality). Pulses are excited at one location on a surface of a plate specimen by use of a broadband transmitting ultrasonic transducer. The stress waves associated with these pulses propagate along the specimen to a receiving transducer at a different location on the same surface. Along the way, the stress waves interact with the microstructure and flaws present between the transducers. The received signal is analyzed to evaluate the microstructure and flaws. The specific variant of the acousto-ultrasonic method implemented in the present developmental system goes beyond the basic principle described above to include the following major additional features: Computer-controlled motorized translation stages are used to automatically position the transducers at specified locations. Scanning is performed in the sense that the measurement, data-acquisition, and data-analysis processes are repeated at different specified transducer locations in an array that spans the specimen surface (or a specified portion of the surface). A pneumatic actuator with a load cell is used to apply a controlled contact force. 
In analyzing the measurement data for each pair of transducer locations in the scan, the total

  11. Low-Cost, High-Performance Hall Thruster Support System

    NASA Technical Reports Server (NTRS)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and a stable and sustained operation during severe Hall effect thruster current oscillations. Laboratory testing has demonstrated discharge module efficiency of 96 percent, which is considerably higher than current state of the art.

  12. High performance quarter-inch cartridge tape systems

    NASA Technical Reports Server (NTRS)

    Schwarz, Ted

    1993-01-01

    Within the established low cost structure of Data Cartridge drive technology, it is possible to achieve nearly 1 terabyte (10^12 bytes) of data capacity and more than 1 Gbit/sec (greater than 100 Mbytes/sec) transfer rates. The desirability of placing this capability within a single cartridge will be determined by the market. The 3.5 in. or smaller form factor may suffice to serve both the current Data Cartridge market and a high-performance segment. In any case, Data Cartridge technology provides a strong, sustainable technology growth path into the 21st century.
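    As a rough sanity check on the quoted figures (assuming decimal units throughout), streaming a full cartridge end-to-end at the quoted rate takes a few hours:

    ```python
    # Back-of-envelope check: time to stream a 1 TB cartridge at >100 MB/s.
    capacity_bytes = 1e12            # ~1 terabyte (decimal units assumed)
    rate_bytes_per_s = 100e6         # 100 Mbytes/sec, the quoted floor
    hours = capacity_bytes / rate_bytes_per_s / 3600
    print(round(hours, 2))           # → 2.78 hours end-to-end
    ```
    
    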

  13. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  14. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    PubMed

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
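    The pair-filtering idea, dropping particle pairs whose separation exceeds the short-range cutoff before any force is computed, can be sketched with a conventional cell-list partitioning. This is the standard CPU formulation, not the paper's FPGA-specific partitioning, and the coordinates below are invented (the minimum-image convention is omitted for brevity).

    ```python
    # Cell-list filtering of particle pairs: bin particles into cells of edge
    # >= cutoff, then test only pairs from the same or adjacent cells.
    import itertools, math

    def filter_pairs(positions, cutoff, box):
        """Return index pairs (i, j), i < j, with separation <= cutoff."""
        ncell = max(1, int(box // cutoff))
        size = box / ncell
        cells = {}
        for idx, (x, y, z) in enumerate(positions):
            key = (int(x // size) % ncell, int(y // size) % ncell,
                   int(z // size) % ncell)
            cells.setdefault(key, []).append(idx)
        pairs = set()
        for (cx, cy, cz), members in cells.items():
            for dx, dy, dz in itertools.product((-1, 0, 1), repeat=3):
                neigh = cells.get(((cx + dx) % ncell, (cy + dy) % ncell,
                                   (cz + dz) % ncell), [])
                for i in members:
                    for j in neigh:
                        if i < j and math.dist(positions[i], positions[j]) <= cutoff:
                            pairs.add((i, j))
        return sorted(pairs)

    pts = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (3.0, 3.0, 3.0)]
    print(filter_pairs(pts, cutoff=1.0, box=8.0))   # → [(0, 1)]
    ```

    Because only a small fraction of all N(N-1)/2 pairs survive the cutoff, implementing the filter cheaply (as the paper does on the FPGA) keeps the expensive force pipelines fed with useful work.
    
    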

  15. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  16. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

Requirements are carefully described in descriptions of systems to be acquired but often there is no requirement to provide measurements and performance monitoring to ensure that requirements are met over the long term after acceptance. A set of measurements for various UNIX-based systems will be available at the 1992 Goddard Conference on Mass Storage Systems and Technologies. The authors invite others to contribute to the set of measurements. The framework for presenting the measurements of supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them is given. Production control and database systems are also included. Though other applications and third party software systems are not addressed, it is important to measure them as well. The capability to integrate measurements from all these components from different vendors, and from the third party software systems was recognized and there are efforts to standardize a framework to do this. The measurement activity falls into the domain of management standards. Standards work is ongoing for Open Systems Interconnection (OSI) systems management; AT&T, Digital, and Hewlett-Packard are developing management systems based on this architecture even though it is not finished. Another effort is in the UNIX International Performance Management Working Group. In addition, there are the Open Systems Foundation's Distributed Management Environment and the Object Management Group. A paper comparing the OSI systems management model and the Object Management Group model has been written. The IBM world has had a capability for measurement for various IBM systems since the 1970s, and different vendors were able to develop tools for analyzing and viewing these measurements. Since IBM was the only vendor, the user groups were able to lobby IBM for the kinds of measurements needed. In the UNIX world of multiple vendors, a common set of measurements will not be as easy to get.

  17. High-performance multimedia encryption system based on chaos.

    PubMed

    Hasimoto-Beltrán, Rogelio

    2008-06-01

Current chaotic encryption systems in the literature do not fulfill security and performance demands for real-time multimedia communications. To satisfy these demands, we propose a generalized symmetric cryptosystem based on N independently iterated chaotic maps (N-map array) periodically perturbed with a three-level perturbation scheme and a double feedback (global and local) to increase the system's robustness to attacks. The first- and second-level perturbations make the cryptosystem extremely sensitive to changes in the plaintext data since the system's output itself (ciphertext global feedback) is used in the perturbation process. Third-level perturbation is a system reset, in which the system-key and chaotic maps are replaced with totally new values. An analysis of the proposed scheme regarding its vulnerability to attacks, statistical properties, and implementation performance is presented. To the best of our knowledge, we provide a secure cryptosystem with one of the highest levels of performance for real-time multimedia communications.
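The ciphertext-feedback mechanism described here can be illustrated with a deliberately simplified toy: a single logistic map whose state is perturbed by each output byte, so every ciphertext byte depends on all preceding plaintext. This sketch shows only the general mechanism; it is not the authors' N-map cryptosystem and is not secure:

```python
def logistic_keystream_encrypt(data, x0=0.7, r=3.99):
    # Toy chaotic stream cipher: iterate a logistic map, derive a keystream
    # byte, and feed the ciphertext back into the map state (global feedback).
    x = x0
    out = bytearray()
    for b in data:
        x = r * x * (1.0 - x)              # chaotic map iteration
        k = int(x * 256) & 0xFF            # keystream byte from the state
        c = b ^ k
        out.append(c)
        x = (x + c / 257.0) % 1.0          # ciphertext-feedback perturbation
        x = min(max(x, 1e-9), 1.0 - 1e-9)  # keep the state inside (0, 1)
    return bytes(out)

def logistic_keystream_decrypt(data, x0=0.7, r=3.99):
    # Decryption replays the same map; the feedback uses the received
    # ciphertext byte, so the state sequences stay synchronized.
    x = x0
    out = bytearray()
    for c in data:
        x = r * x * (1.0 - x)
        k = int(x * 256) & 0xFF
        out.append(c ^ k)
        x = (x + c / 257.0) % 1.0
        x = min(max(x, 1e-9), 1.0 - 1e-9)
    return bytes(out)

ct = logistic_keystream_encrypt(b"multimedia")
pt = logistic_keystream_decrypt(ct)
```

Because the feedback perturbs the map with each ciphertext byte, flipping one plaintext bit changes every subsequent ciphertext byte, which is the sensitivity property the abstract attributes to its first- and second-level perturbations.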

  18. High-performance multimedia encryption system based on chaos

    NASA Astrophysics Data System (ADS)

    Hasimoto-Beltrán, Rogelio

    2008-06-01

Current chaotic encryption systems in the literature do not fulfill security and performance demands for real-time multimedia communications. To satisfy these demands, we propose a generalized symmetric cryptosystem based on N independently iterated chaotic maps (N-map array) periodically perturbed with a three-level perturbation scheme and a double feedback (global and local) to increase the system's robustness to attacks. The first- and second-level perturbations make the cryptosystem extremely sensitive to changes in the plaintext data since the system's output itself (ciphertext global feedback) is used in the perturbation process. Third-level perturbation is a system reset, in which the system-key and chaotic maps are replaced with totally new values. An analysis of the proposed scheme regarding its vulnerability to attacks, statistical properties, and implementation performance is presented. To the best of our knowledge, we provide a secure cryptosystem with one of the highest levels of performance for real-time multimedia communications.

  19. High performance control of harmonic instability from HVDC link system

    SciTech Connect

    Min, W.K.; Yoo, M.H.

    1995-12-31

This paper investigates the usefulness of a novel control method for an HVDC link system that suffers from severe low-order harmonic conditions. The control scheme uses a feedforward method that directly controls the dc current of the dc link. The studies in this paper aim at improving the dynamic response of the HVdc link system under disturbances such as faults. To achieve these objectives, digital time-domain simulations are carried out with the electromagnetic transient program for dc systems (EMTDC). The method results in stable recovery from faults at both rectifier and inverter terminal busbars for an HVdc system that is inherently unstable. It has been found to be robust, and control performance has been enhanced.

  20. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  1. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nishimura, Nobuya; Nomura, Tomohiro; Yabushita, Akihiro; Kashiwagi, Takao

A computer simulation has been developed of the transient drying process in order to predict the dynamic thermal performance of a new superheated steam drying system using an absorption-type temperature amplifier as a steam superheater. A feature of this drying system is that the exhausted superheated steam conventionally discharged from the dryer can be reused as a driving heat source for the generator in this heat pump. But in the transient drying process, the evaporation of moisture sharply decreases; accordingly, the exhausted superheated steam can hardly be expected to serve as the heating source for the generator. So the effects of this exhausted superheated steam and of changes in the hot water and cooling water temperatures were mainly investigated, checking whether this drying system can be driven directly by low-level energy such as solar or waste heat. Furthermore, the performance of this drying system was evaluated on a qualitative basis by using the exergy efficiency. The results show that, under transient drying conditions, the temperature boost of superheated steam is possible at a high temperature and thus the absorption-type temperature amplifier can be an effective steam superheater system.

  2. Toward high performance radioisotope thermophotovoltaic systems using spectral control

    NASA Astrophysics Data System (ADS)

    Wang, Xiawa; Chan, Walker; Stelmakh, Veronika; Celanovic, Ivan; Fisher, Peter

    2016-12-01

    This work describes RTPV-PhC-1, an initial prototype for a radioisotope thermophotovoltaic (RTPV) system using a two-dimensional photonic crystal emitter and low bandgap thermophotovoltaic (TPV) cell to realize spectral control. We validated a system simulation using the measurements of RTPV-PhC-1 and its comparison setup RTPV-FlatTa-1 with the same configuration except a polished tantalum emitter. The emitter of RTPV-PhC-1 powered by an electric heater providing energy equivalent to one plutonia fuel pellet reached 950 °C with 52 W of thermal input power and produced 208 mW output power from 1 cm2 TPV cell. We compared the system performance using a photonic crystal emitter to a polished flat tantalum emitter and found that spectral control with the photonic crystal was four times more efficient. Based on the simulation, with more cell areas, better TPV cells, and improved insulation design, the system powered by a fuel pellet equivalent heat source is expected to reach an efficiency of 7.8%.

  3. Low cost, high performance, self-aligning miniature optical systems

    PubMed Central

    Kester, Robert T.; Christenson, Todd; Kortum, Rebecca Richards; Tkaczyk, Tomasz S.

    2009-01-01

    The most expensive aspects in producing high quality miniature optical systems are the component costs and long assembly process. A new approach for fabricating these systems that reduces both aspects through the implementation of self-aligning LIGA (German acronym for lithographie, galvanoformung, abformung, or x-ray lithography, electroplating, and molding) optomechanics with high volume plastic injection molded and off-the-shelf glass optics is presented. This zero alignment strategy has been incorporated into a miniature high numerical aperture (NA = 1.0W) microscope objective for a fiber confocal reflectance microscope. Tight alignment tolerances of less than 10 μm are maintained for all components that reside inside of a small 9 gauge diameter hypodermic tubing. A prototype system has been tested using the slanted edge modulation transfer function technique and demonstrated to have a Strehl ratio of 0.71. This universal technology is now being developed for smaller, needle-sized imaging systems and other portable point-of-care diagnostic instruments. PMID:19543344

  4. American Models of High-Performance Work Systems.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    1993-01-01

    Looks at work systems that draw on quality engineering and management concepts and use incentives. Discusses how some U.S. companies improve performance and maintain high quality. Suggests that the federal government strategy should include measures to support change in production processes and promote efficient factors of production. (JOW)

  5. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  6. Nanostructured microfluidic digestion system for rapid high-performance proteolysis.

    PubMed

    Cheng, Gong; Hao, Si-Jie; Yu, Xu; Zheng, Si-Yang

    2015-02-07

    A novel microfluidic protein digestion system with a nanostructured and bioactive inner surface was constructed by an easy biomimetic self-assembly strategy for rapid and effective proteolysis in 2 minutes, which is faster than the conventional overnight digestion methods. It is expected that this work would contribute to rapid online digestion in future high-throughput proteomics.

  7. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.
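Parallel file systems in the PVFS family achieve high throughput by striping file data across multiple I/O servers. A hedged sketch of how a byte offset could map to a server under simple round-robin striping (the parameters and the mapping itself are illustrative assumptions, not PVFS's actual defaults or code):

```python
def stripe_location(offset, stripe_size=65536, num_servers=4):
    # Map a byte offset in a striped file to (server index, offset on that
    # server), assuming plain round-robin striping of fixed-size blocks.
    block = offset // stripe_size          # which stripe block the byte is in
    server = block % num_servers           # round-robin server assignment
    local_block = block // num_servers     # how many blocks this server holds before it
    return server, local_block * stripe_size + offset % stripe_size

loc = stripe_location(200000)
```

Because consecutive blocks land on different servers, a large sequential read can be serviced by all servers in parallel, which is the behavior such I/O test beds are designed to measure.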

  8. A High Performance Content Based Recommender System Using Hypernym Expansion

    SciTech Connect

    Potok, Thomas E; Patton, Robert M

    2015-10-20

    There are two major limitations in content-based recommender systems, the first is accurately measuring the similarity of preferred documents to a large set of general documents, and the second is over-specialization which limits the "interesting" documents recommended from a general document set. To address these issues, we propose combining linguistic methods and term frequency methods to improve overall performance and recommendation.

9. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S.; Chung, C.; Fitsos, P.; Gronberg, J.; Ross, M.; Khainovski, O.; Kolomensky, Y.; Loscutoff, P.; Slater, M.; Thomson, M.; Ward, D.; Boogert, S.; Vogel, V.; Meller, R.; Lyapin, A.; Malton, S.; Miller, D.; Frisch, J.; Hinton, S.; May, J.; McCormick, D.; /SLAC /Caltech /KEK, Tsukuba

    2007-07-06

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved--ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  10. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S; Chung, C; Fitsos, P; Gronberg, J; Ross, M; Khainovski, O; Kolomensky, Y; Loscutoff, P; Slater, M; Thomson, M; Ward, D; Boogert, S; Vogel, V; Meller, R; Lyapin, A; Malton, S; Miller, D; Frisch, J; Hinton, S; May, J; McCormick, D; Smith, S; Smith, T; White, G; Orimoto, T; Hayano, H; Honda, Y; Terunuma, N; Urakawa, J

    2005-09-12

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved - ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  11. Fitting modular reconnaissance systems into modern high-performance aircraft

    NASA Astrophysics Data System (ADS)

    Stroot, Jacquelyn R.; Pingel, Leslie L.

    1990-11-01

    The installation of the Advanced Tactical Air Reconnaissance System (ATARS) in the F/A-18D(RC) presented a complex set of design challenges. At the time of the F/A-18D(RC) ATARS option exercise, the design and development of the ATARS subsystems and the parameters of the F/A-18D(RC) were essentially fixed. ATARS is to be installed in the gun bay of the F/A-18D(RC), taking up no additional room, nor adding any more weight than what was removed. The F/A-18D(RC) installation solution required innovations in mounting, cooling, and fit techniques, which made constant trade study essential. The successful installation in the F/A-18D(RC) is the result of coupling fundamental design engineering with brainstorming and nonstandard approaches to every situation. ATARS is sponsored by the Aeronautical Systems Division, Wright-Patterson AFB, Ohio. The F/A-18D(RC) installation is being funded to the Air Force by the Naval Air Systems Command, Washington, D.C.

  12. High-performance space shuttle auxiliary propellant valve system

    NASA Technical Reports Server (NTRS)

    Smith, G. M.

    1973-01-01

    Several potential valve closures for the space shuttle auxiliary propulsion system (SS/APS) were investigated analytically and experimentally in a modeling program. The most promising of these were analyzed and experimentally evaluated in a full-size functional valve test fixture of novel design. The engineering investigations conducted for both model and scale evaluations of the SS/APS valve closures and functional valve fixture are described. Preliminary designs, laboratory tests, and overall valve test fixture designs are presented, and a final recommended flightweight SS/APS valve design is presented.

  13. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.

    1984-01-01

    An initial design for the Bit Processor (BP) referred to in prior reports as the Processing Element or PE has been completed. Eight BP's, together with their supporting random-access memory, a 64 k x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed or floating point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one or two-dimensional arrays; one or two dimensional arrays fixed or floating point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BP's represent a likely initial chip level. The Stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.

  14. A high performance pneumatic braking system for heavy vehicles

    NASA Astrophysics Data System (ADS)

    Miller, Jonathan I.; Cebon, David

    2010-12-01

    Current research into reducing actuator delays in pneumatic brake systems is opening the door for advanced anti-lock braking algorithms to be used on heavy goods vehicles. However, these algorithms require the knowledge of variables that are impractical to measure directly. This paper introduces a sliding mode braking force observer to support a sliding mode controller for air-braked heavy vehicles. The performance of the observer is examined through simulations and field testing of an articulated heavy vehicle. The observer operated robustly during single-wheel vehicle simulations, and provided reasonable estimates of surface friction from test data. The effect of brake gain errors on the controller and observer are illustrated, and a recursive least squares estimator is derived for the brake gain. The estimator converged within 0.3 s in simulations and vehicle trials.
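The recursive least squares (RLS) brake-gain estimator mentioned above can be sketched for the scalar case y = g·u, where g is the unknown gain relating a demand signal to a measured response. The model, signals, and numbers below are illustrative, not the paper's vehicle parameters:

```python
def make_rls(g0=1.0, p0=100.0, forgetting=1.0):
    # Scalar recursive least squares for a single unknown gain g in y = g * u.
    # A forgetting factor < 1 would discount old data (useful if g drifts).
    state = {"g": g0, "p": p0}
    def update(u, y):
        g, p = state["g"], state["p"]
        k = p * u / (forgetting + u * p * u)  # correction gain
        g = g + k * (y - g * u)               # update estimate with residual
        p = (p - k * u * p) / forgetting      # update error covariance
        state["g"], state["p"] = g, p
        return g
    return update

# Estimate an illustrative brake gain of 2.5 from noise-free input/output pairs.
rls = make_rls()
est = 0.0
for step in range(50):
    u = 1.0 + 0.1 * (step % 5)  # persistently exciting demand signal
    est = rls(u, 2.5 * u)
```

With noise-free measurements the estimate converges in a handful of updates; the 0.3 s convergence reported in the abstract reflects the same mechanism running at the brake controller's sample rate on noisy data.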

  15. Dynamic Thermal Management for High-Performance Storage Systems

    SciTech Connect

    Kim, Youngjae; Gurumurthi, Dr Sudhanva; Sivasubramaniam, Anand

    2012-01-01

Thermal-aware design of disk drives is important because high temperatures can cause reliability problems. Dynamic Thermal Management (DTM) techniques have been proposed to operate the disk at the average-case temperature, rather than at the worst case, by modulating the activities to avoid thermal emergencies. The thermal emergencies can be caused by unexpected events, such as fan breaks, increased inlet air temperature, etc. One of the DTM techniques is a delay-based approach that adjusts the disk seek activities, cooling down the disk drives. Even if such a DTM approach could overcome thermal emergencies without stopping disk activity, it suffers from long delays when servicing the requests. Thus, in this chapter, we investigate the possibility of using a multispeed disk-drive (called dynamic rotations per minute (DRPM)) that dynamically modulates the rotational speed of the platter for implementing the DTM technique. Using a detailed performance and thermal simulator of a storage system, we evaluate two possible DTM policies (time-based and watermark-based) with a DRPM disk-drive and observe that dynamic RPM modulation is effective in avoiding thermal emergencies. However, we find that the time taken to transition between different rotational speeds of the disk is critical for the effectiveness of the DRPM-based DTM techniques.
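A watermark-based policy of the kind evaluated here amounts to a hysteresis controller over the drive's available rotational speeds. A minimal sketch, where the thresholds and RPM levels are illustrative assumptions rather than values from the study:

```python
def make_watermark_dtm(low=45.0, high=55.0, speeds=(5400, 7200, 10000)):
    # Watermark-based DTM for a multispeed (DRPM) disk: step the RPM down
    # when temperature crosses the high watermark, and back up only once it
    # falls below the low watermark (hysteresis avoids oscillation).
    state = {"idx": len(speeds) - 1}  # start at full speed
    def control(temp_c):
        if temp_c >= high and state["idx"] > 0:
            state["idx"] -= 1         # too hot: slow the platter to shed heat
        elif temp_c <= low and state["idx"] < len(speeds) - 1:
            state["idx"] += 1         # safely cool: restore performance
        return speeds[state["idx"]]
    return control

dtm = make_watermark_dtm()
trace = [dtm(t) for t in (50, 56, 57, 50, 44, 43)]
```

The gap between the two watermarks keeps the drive from toggling speeds on every sample; the chapter's observation about speed-transition time matters precisely because each step in this trace has a physical cost.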

  16. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  17. NFS as a user interface to a high-performance data system

    SciTech Connect

    Mercier, C.W.

    1991-01-01

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  18. Resource-Efficient Data-Intensive System Designs for High Performance and Capacity

    DTIC Science & Technology

    2015-09-01

Resource-Efficient Data-Intensive System Designs for High Performance and Capacity. Hyeontaek Lim, CMU-CS-15-132, September 2015, School of Computer Science. The system described provides ... query processing and 5.7X higher capacity than the previous state-of-the-art system. It employs new memory-efficient indexing schemes including ECT.

  19. Research into the interaction between high performance and cognitive skills in an intelligent tutoring system

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.

    1991-01-01

    Two intelligent tutoring systems were developed. These tutoring systems are being used to study the effectiveness of intelligent tutoring systems in training high performance tasks and the interrelationship of high performance and cognitive tasks. The two tutoring systems, referred to as the Console Operations Tutors, were built using the same basic approach to the design of an intelligent tutoring system. This design approach allowed researchers to more rapidly implement the cognitively based tutor, the OMS Leak Detect Tutor, by using the foundation of code generated in the development of the high performance based tutor, the Manual Select Keyboard (MSK). It is believed that the approach can be further generalized to develop a generic intelligent tutoring system implementation tool.

  20. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution time compared to a CPU-only detection algorithm.
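The K-NN beat classification step amounts to ranking labelled training beats by distance to a query feature vector and taking a majority vote, which is what the paper parallelizes on the GPU. A serial sketch of the classifier itself (the features and labels below are toy values, not the paper's MIT-BIH feature set):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    # k-nearest-neighbour vote: rank training beats by squared Euclidean
    # distance to the query feature vector, then take the majority label.
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "beat features" (e.g. RR interval, QRS width); labels illustrative.
beats = [
    ((0.80, 0.08), "normal"), ((0.82, 0.09), "normal"), ((0.79, 0.07), "normal"),
    ((0.40, 0.14), "pvc"), ((0.42, 0.15), "pvc"), ((0.38, 0.13), "pvc"),
]
label = knn_classify(beats, (0.41, 0.14))
```

The distance computations for each query are independent across training beats, which is why the classification maps naturally onto CUDA threads.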

  1. PISA and High-Performing Education Systems: Explaining Singapore's Education Success

    ERIC Educational Resources Information Center

    Deng, Zongyi; Gopinathan, S.

    2016-01-01

    Singapore's remarkable performance in Programme for International Student Assessment (PISA) has placed it among the world's high-performing education systems (HPES). In the literature on HPES, its "secret formula" for education success is explained in terms of teacher quality, school leadership, system characteristics and educational…

  2. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators, and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. This paper describes the design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  3. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    ERIC Educational Resources Information Center

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  4. High-Performance Work Systems and School Effectiveness: The Case of Malaysian Secondary Schools

    ERIC Educational Resources Information Center

    Maroufkhani, Parisa; Nourani, Mohammad; Bin Boerhannoeddin, Ali

    2015-01-01

    This study focuses on the impact of high-performance work systems on the outcomes of organizational effectiveness with the mediating roles of job satisfaction and organizational commitment. In light of the importance of human resource activities in achieving organizational effectiveness, we argue that higher employees' decision-making capabilities…

  5. High Performance Work Systems and Organizational Outcomes: The Mediating Role of Information Quality.

    ERIC Educational Resources Information Center

    Preuss, Gil A.

    2003-01-01

    A study of the effect of high-performance work systems on 935 nurses and 182 nurses aides indicated that quality of decision-making information depends on workers' interpretive skills and partially mediated effects of work design and total quality management on organizational performance. Providing relevant knowledge and opportunities to use…

  6. Unlocking the Black Box: Exploring the Link between High-Performance Work Systems and Performance

    ERIC Educational Resources Information Center

    Messersmith, Jake G.; Patel, Pankaj C.; Lepak, David P.

    2011-01-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level…

  7. Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD

    NASA Astrophysics Data System (ADS)

    Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo

    Data mining is often a compute-intensive and time-consuming process. For this reason, several data mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems have been proposed. In this chapter we first discuss different ways to exploit parallelism in the main data mining techniques and algorithms, then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment that uses standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.
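
    The data-parallel pattern the chapter surveys (partition the data set, mine each partition independently, then merge the partial results) can be sketched in a few lines. This is a generic, illustrative Python sketch of that pattern, not the authors' code; the local "mining" step here is just item-frequency counting.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_partition(partition):
    """Local mining step: item-frequency counts for one data partition."""
    return Counter(item for transaction in partition for item in transaction)

def parallel_item_counts(transactions, n_workers=4):
    """Data-parallel pattern: split the dataset, mine each partition
    independently, then merge the partial models (here, simple counts)."""
    chunk = max(1, len(transactions) // n_workers)
    partitions = [transactions[i:i + chunk]
                  for i in range(0, len(transactions), chunk)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(count_partition, partitions)
    total = Counter()
    for p in partials:
        total.update(p)
    return total
```

    The same split/mine/merge shape carries over to Grid settings, where partitions live on remote nodes and the merge is a reduction over the network.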

  8. Evolution of a high-performance storage system based on magnetic tape instrumentation recorders

    NASA Technical Reports Server (NTRS)

    Peters, Bruce

    1993-01-01

    In order to provide transparent access to data in network computing environments, high-performance storage systems are getting smarter as well as faster. Magnetic tape instrumentation recorders contain an increasing amount of intelligence in the form of software and firmware that manages the processes of capturing input signals and data, putting them on media, and then reproducing or playing them back. Such intelligence makes them better recorders, ideally suited for applications requiring the high-speed capture and playback of large streams of signals or data. In order to make recorders better storage systems, intelligence is also being added to provide appropriate computer and network interfaces along with services that enable them to interoperate with host computers or network client and server entities. Thus, recorders are evolving into high-performance storage systems that become an integral part of a shared information system. Datatape has embarked on a program with the Caltech-sponsored Concurrent Supercomputing Consortium to develop a smart mass storage system. Working within the framework of the emerging IEEE Mass Storage System Reference Model, a high-performance storage system that works with the STX File Server to provide storage services for the Intel Touchstone Delta supercomputer is being built. Our objective is to provide the high storage capacity and transfer rate required to support grand challenge applications, such as global climate modeling.

  9. High Performance Variable Speed Drive System and Generating System with Doubly Fed Machines

    NASA Astrophysics Data System (ADS)

    Tang, Yifan

    Doubly fed machines are another alternative for variable speed drive systems. The doubly fed machines, including the doubly fed induction machine, the self-cascaded induction machine and the doubly excited brushless reluctance machine, have several attractive advantages for variable speed drive applications, the most important being the significant cost reduction from a reduced power converter rating. With better understanding, improved machine design, flexible power converters and innovative controllers, the doubly fed machines could compete favorably for many applications, which may also include variable speed power generation. The goal of this research is to enhance the attractiveness of the doubly fed machines for both variable speed drive and variable speed generator applications. Recognizing that wind power is one of the favorable clean, renewable energy sources that can contribute to the solution of the energy and environment dilemma, a novel variable-speed constant-frequency wind power generating system is proposed. By variable speed operation, the energy capturing capability of the wind turbine is improved. The improvement can be further enhanced by effectively utilizing the doubly excited brushless reluctance machine in a slip power recovery configuration. For the doubly fed machines, a stator-flux two-axis dynamic model is established, based on which a flexible active and reactive power control strategy can be developed. High performance operation of the drive and generating systems is obtained through advanced control methods, including stator field orientation control, fuzzy logic control and adaptive fuzzy control. System studies are pursued through unified modeling, computer simulation, stability analysis and power flow analysis of the complete drive or generating system with the machine, the converter and the control. Laboratory implementations and test results with a digital signal processor system are also presented.

  10. Unlocking the black box: exploring the link between high-performance work systems and performance.

    PubMed

    Messersmith, Jake G; Patel, Pankaj C; Lepak, David P; Gould-Williams, Julian

    2011-11-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.

  11. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity of less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovative integration of advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.

  12. Super computers in astrophysics and High Performance simulations of self-gravitating systems

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Di Matteo, P.; Miocchi, P.

    The modern study of the dynamics of stellar systems requires the use of high-performance computers. Indeed, accurate modeling of the structure and evolution of self-gravitating systems like planetary systems, open clusters, globular clusters and galaxies implies the evaluation of body-body interactions over the whole size of the structure, a task that is computationally very expensive, in particular when it is performed over long intervals of time. In this report we give a concise overview of the main problems of stellar-system simulations and present some exciting results we obtained on the interaction of globular clusters with the parent galaxy.
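
    The computationally expensive body-body evaluation mentioned above is, at its simplest, a direct O(N^2) summation. A minimal sketch follows (softened Newtonian gravity in illustrative units; not the authors' code):

```python
import math

def accelerations(positions, masses, G=1.0, eps=1e-3):
    """Direct-summation O(N^2) body-body force evaluation, the
    computational core of collisional stellar-system simulations.
    eps is a softening length that avoids singularities at close
    encounters; G=1 picks dimensionless N-body units."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            f = G * masses[j] / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc
```

    Production codes replace the inner double loop with tree or fast-multipole approximations, or run it on parallel hardware, precisely because this kernel scales quadratically with N.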

  13. A tutorial on the construction of high-performance resolution/paramodulation systems

    SciTech Connect

    Butler, R.; Overbeek, R.

    1990-09-01

    Over the past 25 years, researchers have written numerous deduction systems based on resolution and paramodulation. Of these systems, a very few have been capable of generating and maintaining a "formula database" containing more than just a few thousand clauses. These few systems were used to explore mechanisms for rapidly extracting limited subsets of "relevant" clauses. We have written this tutorial to reflect some of the best ideas that have emerged and to cast them in a form that makes them easily accessible to students wishing to write their own high-performance systems. 4 refs.
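
    The "relevant"-clause extraction the tutorial discusses is typically built on literal indexing: map each literal to the clauses containing it, so candidates for a resolution step are found without scanning the whole database. A toy sketch, with invented names, assuming clauses are sets of string literals and a leading "~" marks negation:

```python
from collections import defaultdict

class ClauseDB:
    """Toy formula database with a literal index. The index from each
    literal to the clauses containing it lets us rapidly extract the
    subset of clauses relevant to a query literal."""
    def __init__(self):
        self.clauses = []
        self.index = defaultdict(set)

    def add(self, clause):
        cid = len(self.clauses)
        self.clauses.append(frozenset(clause))
        for lit in clause:
            self.index[lit].add(cid)
        return cid

    def relevant(self, literal):
        """Clauses containing the complement of `literal`: the only
        candidates for a resolution step on that literal."""
        comp = literal[1:] if literal.startswith('~') else '~' + literal
        return [self.clauses[cid] for cid in sorted(self.index[comp])]
```

    High-performance provers refine this idea into term indexing (discrimination trees, path indexing), but the retrieval-by-index principle is the same.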

  14. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls among the elderly has been increasing as society has aged. The main factor is a deteriorating center of balance due to declining physical performance. Another major factor is that the elderly tend to walk bowlegged, so the body's center of gravity tends to swing from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system to treat the gap in the center of balance. We also designed High-Performance Shoes that show the status of a person's balance while walking, and provided walking assistance from the insole, whose stiffness, matched to the pressure distribution of the human sole, can be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole distribution patterns and corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.

  15. A High Performance Frequency Standard and Distribution System for Cassini Ka-Band Experiment

    DTIC Science & Technology

    2005-08-01

    Wang, R. T.; Calhoun, M. D.; Kirk, A.; Diener, W. A.

    …spacecraft in a series of occultation measurements performed over a 78-day period from March to June 2005. I. INTRODUCTION: The Cassini-Huygens project… …successful Huygens landing on the moon Titan, the Cassini spacecraft has begun a 3-year mission of continued moon flybys and observations. During this time…

  16. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals

    PubMed Central

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-01-01

    A high-performance and cost-effective laser protection system is of crucial importance given the rapid advance of lasers in military and civilian fields, which can lead to severe damage of human eyes and sensitive optical devices. However, such protection is crucially hindered by the angle-dependent protective effect and the complex preparation process. Here we demonstrate that angle independence, good processibility, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, an unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers. PMID:28230153

  17. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-02-01

    A high-performance and cost-effective laser protection system is of crucial importance given the rapid advance of lasers in military and civilian fields, which can lead to severe damage of human eyes and sensitive optical devices. However, such protection is crucially hindered by the angle-dependent protective effect and the complex preparation process. Here we demonstrate that angle independence, good processibility, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, an unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers.

  18. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals.

    PubMed

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-02-23

    A high-performance and cost-effective laser protection system is of crucial importance given the rapid advance of lasers in military and civilian fields, which can lead to severe damage of human eyes and sensitive optical devices. However, such protection is crucially hindered by the angle-dependent protective effect and the complex preparation process. Here we demonstrate that angle independence, good processibility, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, an unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers.

  19. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

    SciTech Connect

    Tierney, Brian; Johnston, William; Crowley, Brian; Hoo, Gary; Brooks, Chris; Gunter, Dan

    1999-12-23

    The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
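
    The precision event logs NetLogger relies on are, at bottom, timestamped key=value records that can be correlated end-to-end across hosts and applications. A minimal Python sketch of the idea follows; the line format and field names here are illustrative simplifications, not NetLogger's actual ULM schema:

```python
import time

def log_event(stream, event, **fields):
    """Append one timestamped event record as a key=value line.
    Matching *.start / *.end events logged on different hosts can later
    be paired by event name to measure end-to-end latencies."""
    parts = ["ts=%.6f" % time.time(), "event=%s" % event]
    parts += ["%s=%s" % (k, v) for k, v in sorted(fields.items())]
    stream.write(" ".join(parts) + "\n")

def parse_event(line):
    """Parse one record back into a dict of string fields."""
    return dict(kv.split("=", 1) for kv in line.strip().split())
```

    In a real deployment the streams would be collected by an agent and fed to visualization tools, as the abstract describes; precise, synchronized clocks are what make the cross-host correlation meaningful.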

  20. Damage-Mitigating Control of Space Propulsion Systems for High Performance and Extended Life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang

    1994-01-01

    A major goal in the control of complex mechanical systems, such as spacecraft, rocket engines, advanced aircraft, and power plants, is to achieve high performance with increased reliability, component durability, and maintainability. The current practice of decision and control systems synthesis focuses on improving performance and diagnostic capabilities under constraints that often do not adequately represent materials degradation. In view of the high performance requirements of the system and the availability of improved materials, the lack of appropriate knowledge about the properties of these materials will lead either to less-than-achievable performance due to overly conservative design, or to over-straining of the structure, leading to unexpected failures and drastic reduction of the service life. The key idea in this report is that a significant improvement in service life could be achieved by a small reduction in the system's dynamic performance. The major task is to characterize the damage generation process, and then utilize this information in a mathematical form to synthesize a control law that meets the system requirements and simultaneously satisfies the constraints imposed by the material and structural properties of the critical components. The concept of damage mitigation is introduced for control of mechanical systems to achieve high performance with a prolonged life span. A model of fatigue damage dynamics is formulated in the continuous-time setting, instead of a cycle-based representation, for direct application to control systems synthesis. An optimal control policy is then formulated via nonlinear programming under specified constraints on the damage rate and accumulated damage. The results of simulation experiments for the transient upthrust of a bipropellant rocket engine are presented to demonstrate the efficacy of the damage-mitigating control concept.
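
    The formulation the abstract describes (optimal control under constraints on both the damage rate and the accumulated damage) can be written schematically as follows; the symbols are illustrative, not the authors' notation:

```latex
\min_{u(\cdot)} \; J = \int_0^{t_f} \left[ (x - x_{\mathrm{ref}})^{\top} Q \,(x - x_{\mathrm{ref}}) + u^{\top} R\, u \right] dt
\quad \text{subject to} \quad
\dot{x} = f(x, u), \qquad
\dot{\delta} = h(x, \delta) \ge 0, \qquad
\dot{\delta}(t) \le \beta(t), \qquad
\delta(t_f) \le \delta_{\max}
```

    Here x is the plant state, u the control, and delta the continuous-time damage variable; the trade-off the report emphasizes is that tightening the bound beta(t) on the damage rate costs only a small increase in the performance index J while substantially extending service life.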

  1. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions to have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range from a few milliseconds to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  2. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    Considering a practical continuous variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
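
    The core of such frame synchronization is sliding a known sync sequence along the received data and picking the correlation peak; taking the correlation magnitude makes the estimate insensitive to a constant phase rotation from the channel. A pure-Python sketch of that general idea (not the paper's exact method):

```python
def find_frame_offset(received, sync):
    """Locate a known synchronization frame inside a received complex
    sequence by sliding correlation. Using the correlation *magnitude*
    makes the estimate robust to a constant phase rotation introduced
    by the quantum channel."""
    best_off, best_mag = 0, -1.0
    for off in range(len(received) - len(sync) + 1):
        c = sum(received[off + i] * sync[i].conjugate()
                for i in range(len(sync)))
        if abs(c) > best_mag:
            best_mag, best_off = abs(c), off
    return best_off
```

    Lengthening the sync frame raises the correlation peak relative to the noise floor, which is the SNR-versus-overhead trade-off the abstract alludes to.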

  3. Systems and methods for advanced ultra-high-performance InP solar cells

    DOEpatents

    Wanlass, Mark

    2017-03-07

    Systems and Methods for Advanced Ultra-High-Performance InP Solar Cells are provided. In one embodiment, an InP photovoltaic device comprises: a p-n junction absorber layer comprising at least one InP layer; a front surface confinement layer; and a back surface confinement layer; wherein either the front surface confinement layer or the back surface confinement layer forms part of a High-Low (HL) doping architecture; and wherein either the front surface confinement layer or the back surface confinement layer forms part of a heterointerface system architecture.

  4. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    SciTech Connect

    Wang, Teng; Oral, H Sarp; Wang, Yandong; Settlemyer, Bradley W; Atchley, Scott; Yu, Weikuan

    2014-01-01

    The growth of computing power on large-scale systems requires commensurately high-bandwidth I/O systems. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5x on leadership-class computer systems.
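
    The buffering pattern described (absorb bursty writes in fast storage, drain them asynchronously to the long-term parallel file system) reduces to a producer-consumer sketch. This is illustrative single-process Python only; BurstMem itself is a distributed storage framework:

```python
import queue
import threading

class BurstBuffer:
    """Minimal burst-buffer sketch: writes land in a fast in-memory
    queue and a background thread drains ("flushes") them to a slower
    backing store, decoupling application I/O from storage speed."""
    def __init__(self, backing_store):
        self.q = queue.Queue()
        self.store = backing_store
        self._t = threading.Thread(target=self._flush, daemon=True)
        self._t.start()

    def write(self, data):
        self.q.put(data)              # fast path: absorb the burst

    def _flush(self):
        while True:
            item = self.q.get()
            if item is None:          # shutdown sentinel
                break
            self.store.append(item)   # slow path: drain to the PFS

    def close(self):
        """Flush everything still queued, then stop the drain thread."""
        self.q.put(None)
        self._t.join()
```

    The application sees only the fast `write` path during a burst; the gradual flush happens in the background, which is exactly the role the paper assigns to the burst buffer tier.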

  5. A High Performance Parachute System for the Recovery of Small Space Capsules

    NASA Astrophysics Data System (ADS)

    Koldaev, V.; Moraes, P., Jr.

    2002-01-01

    A non-guided high performance parachute system has been developed and tested for the recovery of orbital payloads or space capsules. The system is safe, efficient and affordable for use on small vehicles. It is based on a pilot parachute, a drag parachute and a cluster of main parachutes, with an air bag to reduce the impact. The system has been designed to maintain a stable descent with velocity up to 10 m/s and to prevent failures. To assure all these characteristics, the parachute canopy areas, inflation and flight dynamics have been determined by numerical optimisation of the system parameters. Due to the mainly empirical nature of parachute design and development, wind tunnel and flight tests were conducted in order to achieve the high reliability imposed by user requirements. The present article describes the system and discusses in detail the design features and testing of the parachutes.
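
    The starting point for sizing the canopy areas is the steady-descent drag balance m g = (1/2) rho C_d A v^2, solved for the area that yields a target descent speed. A hedged sketch with illustrative coefficient values (not the paper's numbers):

```python
def canopy_area(mass_kg, v_descent, cd=0.8, rho=1.225, g=9.81):
    """Total canopy area needed for a target steady descent speed,
    from the drag balance m*g = 0.5*rho*cd*A*v**2. The drag
    coefficient cd and sea-level density rho are illustrative
    assumptions, not values taken from the paper."""
    return 2.0 * mass_kg * g / (rho * cd * v_descent ** 2)
```

    For example, a 200 kg capsule descending at the 10 m/s limit quoted in the abstract would need roughly 40 m^2 of effective canopy area under these assumed coefficients; the paper's numerical optimisation refines such first-cut estimates together with inflation and flight dynamics.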

  6. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project produced 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.
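
    Adaptive flow-level detection of the kind listed above can be caricatured as a baseline-plus-deviation test per flow. A deliberately simple sketch (EWMA baseline with a multiplicative threshold; the parameters are invented for illustration and stand in for the project's far more sophisticated adaptive methods):

```python
def detect_anomalies(flow_counts, alpha=0.3, k=3.0):
    """Flag time steps whose flow count exceeds k times an
    exponentially weighted moving-average (EWMA) baseline.
    alpha is the EWMA smoothing factor; both parameters are
    illustrative assumptions."""
    baseline, flagged = None, []
    for t, count in enumerate(flow_counts):
        if baseline is not None and count > k * max(baseline, 1.0):
            flagged.append(t)
        baseline = (count if baseline is None
                    else alpha * count + (1 - alpha) * baseline)
    return flagged
```

    The adaptive element is that the baseline tracks normal traffic drift, so only abrupt deviations are flagged; reducing false positives then requires the kind of integrated cross-checks the abstract describes.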

  7. Coal-fired high performance power generating system. Quarterly progress report, January 1--March 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    This report covers work carried out under Task 2, Concept Definition and Analysis, and Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤ 25% of NSPS; coal ≥ 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The cycle optimization effort has brought about several revisions to the system configuration resulting from: (1) the use of Illinois No. 6 coal instead of Utah Blind Canyon; (2) the use of coal rather than methane as a reburn fuel; (3) reducing radiant section outlet temperatures to 1700°F (down from 1800°F); and (4) the need to use higher performance (higher cost) steam cycles to offset losses introduced as more realistic operating and construction constraints are identified.

  8. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
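
    The distributable-block pattern at the heart of the platform (split the volume into blocks, process blocks in parallel, reassemble) can be sketched in Python with a flat 1-D array standing in for a 3-D volume; the block size and worker count are illustrative, and real block volumes would carry halo regions and 3-D indexing:

```python
from concurrent.futures import ThreadPoolExecutor

def process_in_blocks(volume, block_size, fn, workers=4):
    """Split `volume` into fixed-size blocks, apply `fn` to each block
    in parallel, and reassemble the results in order. A 1-D sketch of
    the size-adaptive distributable-block idea."""
    blocks = [volume[i:i + block_size]
              for i in range(0, len(volume), block_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(fn, blocks))
    return [v for block in results for v in block]
```

    Because each block is independent, the same decomposition maps onto multi-core machines, clusters, or cloud nodes, which is the scalability argument the abstract makes; the platform's task scheduler then balances blocks across workers.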

  9. High-performance electronics for time-of-flight PET systems.

    PubMed

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively.
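
    Coincidence identification by "digitally comparing the values of the time stamps" amounts to pairing events from two detectors whose stamps fall within a coincidence window. A sketch assuming sorted time stamps in picoseconds (the window value is invented for illustration):

```python
def find_coincidences(stamps_a, stamps_b, window_ps=500):
    """Pair single events from two detectors whose TDC time stamps
    differ by no more than the coincidence window. Inputs are sorted
    lists of time stamps in picoseconds; a two-pointer sweep keeps
    the comparison linear in the number of events."""
    pairs, j = [], 0
    for ta in stamps_a:
        # advance past b-events too old to match this a-event
        while j < len(stamps_b) and stamps_b[j] < ta - window_ps:
            j += 1
        if j < len(stamps_b) and abs(stamps_b[j] - ta) <= window_ps:
            pairs.append((ta, stamps_b[j]))
    return pairs
```

    The better the timing resolution (60 ps FWHM here), the narrower the window can be, which cuts random coincidences and, in TOF PET, localizes the annihilation point along the line of response.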

  10. High-performance electronics for time-of-flight PET systems

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Peng, Q.; Vu, C. Q.; Turko, B. T.; Moses, W. W.

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics of a commercial whole-body PET camera (the Siemens/CPS Cardinal electronics), modified to improve timing performance. The fundamental contributors in the electronics that can limit timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photodetector into a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules read out with the electronics system give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals, respectively.

  11. Coal-fired high performance power generating system. Quarterly progress report, April 1--June 30, 1993

    SciTech Connect

    Not Available

    1993-11-01

    This report covers work carried out under Task 2, Concept Definition and Analysis, Task 3, Preliminary R&D, and Task 4, Commercial Generating Plant Design, under Contract AC22-92PC91155, ``Engineering Development of a Coal Fired High Performance Power Generation System`` between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NO{sub x}, SO{sub x} and particulates {le}25% NSPS; coal {ge}65% of heat input; all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MW{sub e} combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis, we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. A survey of currently available high temperature alloys has been completed and some of their high temperature properties are shown for comparison. Several of the most promising candidates will be selected for testing to determine corrosion resistance and high temperature strength. The corrosion resistance testing of candidate refractory coatings is continuing and some of the recent results are presented. This effort will provide important design information that will ultimately establish the operating ranges of the HITAF.

  12. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    SciTech Connect

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling, together with the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components fall into several categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation and characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on {approx}1000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large-scale high performance computing resources for the first time.
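
The hierarchical arrangement described above, an outer optimization level driving a parameter-sampling level that in turn runs many simulations, can be sketched with a toy ODE; the function names and the two-level layout are illustrative assumptions, not the HiPer SBTK API.

```python
# Sketch of a two-level simulation/optimization hierarchy: an outer level
# farms out candidate parameter sets, an inner level runs one (toy) ODE
# integration per sample in parallel.
from concurrent.futures import ThreadPoolExecutor

def simulate(k):
    """Toy stand-in for a stiff ODE integration: explicit Euler on dy/dt = -k*y."""
    y, dt = 1.0, 0.01
    for _ in range(100):
        y += dt * (-k * y)
    return y  # approximately exp(-k)

def sweep(samples, workers=4):
    """Parameter-sampling level: evaluate every candidate in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(zip(samples, ex.map(simulate, samples)))

def optimize(candidate_sets, target=0.5):
    """Top level: pick the parameter whose simulated output is closest to a target."""
    results = {}
    for samples in candidate_sets:
        results.update(sweep(samples))
    return min(results, key=lambda k: abs(results[k] - target))

# exp(-0.7) ~= 0.497 is the closest to the 0.5 target among these candidates
best = optimize([[0.1, 0.5, 0.7], [0.9, 1.5]])
```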

  13. Towards Building High Performance Medical Image Management System for Clinical Trials.

    PubMed

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations in order to mark up and annotate the images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrieval in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server to multiple concurrent client users, efficient communication protocols for transporting data, and effective management of data versioning for audit trails. We study the major bottlenecks of such a system, then propose and evaluate a solution that uses hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.
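
The database-backed versioning idea for audit trails can be sketched with an append-only revision table: every change adds a new version row instead of overwriting, so the full history survives. The schema and function names below are illustrative assumptions; the paper does not publish its exact design.

```python
# Sketch of append-only, database-based versioning for image annotations.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE image_versions (
    image_id TEXT, version INTEGER, annotation TEXT, author TEXT,
    PRIMARY KEY (image_id, version))""")

def save_revision(image_id, annotation, author):
    """Append a new revision; earlier versions are never modified."""
    cur = db.execute(
        "SELECT COALESCE(MAX(version), 0) FROM image_versions WHERE image_id=?",
        (image_id,))
    next_version = cur.fetchone()[0] + 1
    db.execute("INSERT INTO image_versions VALUES (?, ?, ?, ?)",
               (image_id, next_version, annotation, author))
    return next_version

def latest(image_id):
    """Return (version, annotation) for the most recent revision."""
    cur = db.execute("""SELECT version, annotation FROM image_versions
                        WHERE image_id=? ORDER BY version DESC LIMIT 1""",
                     (image_id,))
    return cur.fetchone()

save_revision("scan-001", "lesion at slice 42", "reader-A")
save_revision("scan-001", "lesion at slice 42, diameter 8mm", "reader-B")
v, note = latest("scan-001")
```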

  14. Towards Building High Performance Medical Image Management System for Clinical Trials

    PubMed Central

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations in order to mark up and annotate the images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrieval in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server to multiple concurrent client users, efficient communication protocols for transporting data, and effective management of data versioning for audit trails. We study the major bottlenecks of such a system, then propose and evaluate a solution that uses hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems. PMID:21603096

  15. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations in order to mark up and annotate the images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrieval in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server to multiple concurrent client users, efficient communication protocols for transporting data, and effective management of data versioning for audit trails. We study the major bottlenecks of such a system, then propose and evaluate a solution that uses hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.

  16. a High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using Particle System

    NASA Astrophysics Data System (ADS)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of rainfall-runoff processes is essential for disaster emergency response and sustainable development. One common disadvantage of existing conceptual hydrological models is that they are highly dependent on specific spatial-temporal contexts. Meanwhile, owing to the inter-dependence of adjacent flow paths, it remains difficult for RS- or GIS-supported distributed hydrological models to achieve high performance in real-world applications. As an attempt to improve the efficiency of those models, this study presents a high-performance rainfall-runoff simulation framework based on a flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent raindrop particles. A separate particle system, representing surface runoff, models the precipitation process and simulates surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The results of a speedup experiment show that the proposed framework can significantly improve simulation performance simply by adding independent processors. By separating the catchment elements from the accumulated water, this study provides an extensible solution for improving existing distributed hydrological models. A parallel modeling and simulation platform still needs to be developed and validated before it can be applied to monitoring real-world hydrologic processes.
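
The core idea, raindrop particles advancing independently along a topologically linked flow path network, can be sketched as follows. Each particle's update depends only on its current node, which is what makes the particles trackable by concurrent processors; the toy network and step rule below are illustrative assumptions, not the authors' code.

```python
# Sketch of particles moving along a flow path network; each node links to
# exactly one downstream node (None marks the catchment outlet).
downstream = {"ridge": "slope", "slope": "gully", "gully": "outlet", "outlet": None}

def step(particles):
    """Advance every particle one link downstream; particles at the outlet stay put.
    Each particle update is independent, so this loop parallelizes trivially."""
    return [downstream[p] if downstream[p] is not None else p for p in particles]

def run(particles, steps):
    """Drive the simulation for a fixed number of synchronous time steps."""
    for _ in range(steps):
        particles = step(particles)
    return particles

# after three steps, every particle has reached the outlet
final = run(["ridge", "slope", "gully"], 3)
```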

  17. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and may be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
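
The subscription-based event filtering described above can be sketched as a registry of predicates: only events that match a subscription are forwarded, so traffic is cut at the source. The class and interface names are illustrative assumptions, not the paper's architecture.

```python
# Sketch of subscription-based event filtering for a monitoring pipeline.
class EventFilter:
    def __init__(self):
        self.subscriptions = []  # (predicate, sink) pairs

    def subscribe(self, predicate, sink):
        """Register interest in events matching `predicate`; matches go to `sink`."""
        self.subscriptions.append((predicate, sink))

    def publish(self, event):
        """Forward the event only to matching subscribers; returns delivery count."""
        delivered = 0
        for predicate, sink in self.subscriptions:
            if predicate(event):
                sink.append(event)
                delivered += 1
        return delivered

f = EventFilter()
errors, slow = [], []
f.subscribe(lambda e: e["level"] == "error", errors)
f.subscribe(lambda e: e.get("latency_ms", 0) > 100, slow)

f.publish({"level": "info", "latency_ms": 5})     # matches neither subscription
n = f.publish({"level": "error", "latency_ms": 250})  # matches both
```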

  18. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means that use at exascale comes with a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon Phi-based systems in comparison to NVIDIA Tesla and Sandy Bridge based systems.

  19. A compilation system that integrates high performance Fortran and Fortran M

    SciTech Connect

    Foster, I.; Xu, Ming; Avalani, B.; Choudhary, A.

    1994-06-01

    Task parallelism and data parallelism are often seen as mutually exclusive approaches to parallel programming. Yet there are important classes of application, for example in multidisciplinary simulation and command and control, that would benefit from an integration of the two approaches. In this paper, we describe a programming system that we are developing to explore this sort of integration. This system builds on previous work on task-parallel and data-parallel Fortran compilers to provide an environment in which the task-parallel language Fortran M can be used to coordinate data-parallel High Performance Fortran tasks. We use an image-processing problem to illustrate the issues that arise when building an integrated compilation system of this sort.

  20. An Empirical Examination of the Mechanisms Mediating between High-Performance Work Systems and the Performance of Japanese Organizations

    ERIC Educational Resources Information Center

    Takeuchi, Riki; Lepak, David P.; Wang, Heli; Takeuchi, Kazuo

    2007-01-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human…

  1. Management of Virtual Large-scale High-performance Computing Systems

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2011-01-01

    Linux is widely used on high-performance computing (HPC) systems, from commodity clusters to Cray supercomputers (which run the Cray Linux Environment). These platforms primarily differ in their system configuration: some only use SSH to access compute nodes, whereas others employ full resource management systems (e.g., Torque and ALPS on Cray XT systems). Furthermore, the latest improvements in system-level virtualization techniques, such as hardware support, virtual machine migration for system resilience, and reduction of virtualization overheads, enable the use of virtual machines on HPC platforms. Currently, tools for the management of virtual machines in the context of HPC systems are still quite basic and often tightly coupled to the target platform. In this document, we present a new system tool for the management of virtual machines in the context of large-scale HPC systems, including a run-time system and support for all major virtualization solutions. The proposed solution is based on two key aspects. First, Virtual System Environments (VSEs), introduced in a previous study, provide a flexible method to define the software environment that will be used within virtual machines. Second, we propose a new system run-time for the management and deployment of VSEs on HPC systems, which supports a wide range of system configurations. For instance, this generic run-time can interact with resource managers such as Torque for the management of virtual machines. Finally, the proposed solution provides appropriate abstractions to enable use with a variety of virtualization solutions on different Linux HPC platforms, including Xen, KVM and the HPC-oriented Palacios.

  2. State observers and Kalman filtering for high performance vibration isolation systems

    SciTech Connect

    Beker, M. G. Bertolini, A.; Hennes, E.; Rabeling, D. S.; Brand, J. F. J. van den; Bulten, H. J.

    2014-03-15

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.
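
The observer-based control loop described above can be illustrated with a scalar toy plant: a Kalman filter estimates the payload state from noisy measurements, and a feedback gain (standing in for the linear quadratic regulator) acts on the estimate. All gains, noise levels, and the plant model below are illustrative assumptions, not the Advanced Virgo system.

```python
# Scalar sketch of state-observer-based feedback: Kalman estimate + gain.
import random

random.seed(1)
A, B = 1.0, 1.0        # discrete-time toy plant: x' = A*x + B*u + noise
Q, R = 1e-4, 0.04      # process and measurement noise variances
K_fb = 0.8             # feedback gain acting on the state estimate

x, x_hat, P = 1.0, 0.0, 1.0   # true state, estimate, estimate covariance
for _ in range(50):
    u = -K_fb * x_hat                      # control acts on the estimate
    x = A * x + B * u + random.gauss(0, Q ** 0.5)   # plant update
    z = x + random.gauss(0, R ** 0.5)      # noisy measurement
    # Kalman predict step
    x_hat = A * x_hat + B * u
    P = A * P * A + Q
    # Kalman update step
    Kg = P / (P + R)
    x_hat += Kg * (z - x_hat)
    P = (1 - Kg) * P
# residual motion of the "payload" is now far below the initial offset of 1.0
```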

  3. A survey on resource allocation in high performance distributed computing systems

    SciTech Connect

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. This study reports a comprehensive survey describing resource allocation in various HPC systems. The aim of the work is to aggregate existing HPC solutions under a joint framework and to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classes; a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is therefore required, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementations of existing resource allocation strategies that are widely presented in the literature.

  4. State observers and Kalman filtering for high performance vibration isolation systems.

    PubMed

    Beker, M G; Bertolini, A; van den Brand, J F J; Bulten, H J; Hennes, E; Rabeling, D S

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.

  5. State observers and Kalman filtering for high performance vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Beker, M. G.; Bertolini, A.; van den Brand, J. F. J.; Bulten, H. J.; Hennes, E.; Rabeling, D. S.

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.

  6. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles

    NASA Astrophysics Data System (ADS)

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles is proposed. We synthesized hexagonal monodisperse β-NaYF4:Yb3+,Er3+,Tm3+ upconversion nanoparticles and tuned the intensity ratio of the red emission (at 653 nm) to the green emission (at 523 and 541 nm) to around 2 : 1, in order to match the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect between the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, an important anion in environmental and physiological systems that can also reduce Prussian blue to Prussian white nanoparticles, leading to a decrease in the absorption spectrum, was chosen as the target. We were able to determine the concentration of sulfite in aqueous solution with a low detection limit over a broad linear range.

  7. Multisensory systems integration for high-performance motor control in flies.

    PubMed

    Frye, Mark A

    2010-06-01

    Engineered tracking systems 'fuse' data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control.

  8. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
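
The AWGR property that enables contention resolution in the wavelength domain can be sketched with the standard idealized cyclic routing rule: each (input port, wavelength) pair maps to a unique output port, so a source reaches any destination simply by choosing its transmit wavelength. The port count below is an assumption for illustration.

```python
# Sketch of cyclic wavelength routing in an N x N AWGR.
N = 8

def awgr_output(in_port, wavelength):
    """Idealized AWGR routing rule: output = (input + wavelength) mod N."""
    return (in_port + wavelength) % N

def wavelength_for(in_port, out_port):
    """Wavelength a source must transmit on to reach a given destination."""
    return (out_port - in_port) % N

# From each input, the N wavelengths reach all N outputs exactly once,
# so simultaneous flows on distinct wavelengths never contend inside the AWGR.
for i in range(N):
    assert {awgr_output(i, w) for w in range(N)} == set(range(N))

w = wavelength_for(2, 5)  # wavelength node 2 uses to reach node 5
```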

  9. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  10. Multisensory systems integration for high-performance motor control in flies

    PubMed Central

    Frye, Mark A.

    2010-01-01

    Engineered tracking systems ‘fuse’ data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control. PMID:20202821

  11. Users matter : multi-agent systems model of high performance computing cluster users.

    SciTech Connect

    North, M. J.; Hood, C. S.; Decision and Information Sciences; IIT

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  12. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    SciTech Connect

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
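
    The log-structured write idea can be sketched in a few lines. This illustrates the principle only, not the Scientific Data Services implementation: subarray writes are appended to a log during the write phase, and the contiguous physical layout is assembled later, as resources permit.

```python
# Log-structured writes to a logical 2D array, reassembled on demand.
class ArrayLog:
    def __init__(self, shape):
        self.shape = shape
        self.log = []  # append-only: ((row0, col0), block)

    def write(self, origin, block):
        # Fast path: just record the subarray and where it logically goes.
        self.log.append((origin, block))

    def assemble(self, fill=0):
        # Later, replay the log into a contiguous physical layout.
        rows, cols = self.shape
        out = [[fill] * cols for _ in range(rows)]
        for (r0, c0), block in self.log:
            for dr, row in enumerate(block):
                for dc, v in enumerate(row):
                    out[r0 + dr][c0 + dc] = v
        return out

log = ArrayLog((4, 4))
log.write((0, 0), [[1, 1], [1, 1]])
log.write((2, 2), [[2, 2], [2, 2]])
grid = log.assemble()
print(grid[0][0], grid[3][3], grid[0][3])  # → 1 2 0
```

    Because the system knows the writes target a single logical array, it is free to defer, reorder, compress, or re-layout the data behind this interface — the separation of logical view from physical layout that the proposal borrows from databases.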

  13. High-performance IR thermography system based on Class II Thermal Imaging Common Modules

    NASA Astrophysics Data System (ADS)

    Bell, Ian G.

    1991-03-01

    The Class II Thermal Imaging Common Modules were originally developed for the U.K. Ministry of Defence as the basis of a number of high performance thermal imaging systems for use by the British Armed Forces. These systems are characterized by high spatial resolution, high thermal resolution and real time thermal image update rate. A TICM II thermal imaging system uses a cryogenically cooled eight element Cadmium- Mercury-Telluride (CMT) SPRITE (Signal PRocessing In The Element) detector which is mechanically scanned over the thermal scene to be viewed. The TALYTHERM system is based on a modified TICM II thermal image connected to an IBM PC-AT compatible computer having image processing hardware installed and running the T.E.M.P.S. (Thermal Emission Measurement and Processing System) software package for image processing and data analysis. The operation of a TICM II thermal imager is briefly described highlighting the use of the SPRITE detector which coupled with a serial/parallel scanning technique yields high temporal, spatial and thermal resolutions. The conversion of this military thermal image into thermography system is described, including a discussion of the modifications required to a standard imager. The technique for extracting temperature information from a real time thermal image and how this is implemented in a TALYTHERM system is described. The D.A.R.T. (Discrete Attenuation of Radiance Thermography) system which is based on an extensively modified TICM II thermal imager is also described. This system is capable of measuring temperatures up to 1000 degrees C whilst maintaining the temporal and spatial resolutions inherent in a TICM II imager. Finally applications of the TALYTHERM in areas such as NDT (Non Destructive Testing), medical research and military research are briefly described.

  14. An empirical examination of the mechanisms mediating between high-performance work systems and the performance of Japanese organizations.

    PubMed

    Takeuchi, Riki; Lepak, David P; Wang, Heli; Takeuchi, Kazuo

    2007-07-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human capital and encourage a high degree of social exchange within an organization, and that these are positively related to the organization's overall performance. On the basis of a sample of Japanese establishments, the results provide support for the existence of these mediating mechanisms through which high-performance work systems affect overall establishment performance.

  15. High-performance Landsat/SPOT dual S-/X-band telemetry tracking and receiving system

    NASA Astrophysics Data System (ADS)

    Bollermann, Bruce; Harshbarger, Roger; Haynie, Mark; Pande, Kailash

    A high-performance dual S/X-band telemetry tracking and receiving system has been developed to provide a low-cost earth station for receiving high-resolution data from current and future Landsat/SPOT polar orbiting satellites. The antenna system consists of a dual Cassegrain configuration with a 10-m parabolic reflector designed for 100-mph wind loading and 10 deg/sec² accelerations. The antenna is mounted on a newly developed elevation-over-azimuth tracking pedestal with a torque-biased drive train for each axis. This drive train provides an exceptionally wide dynamic range of tracking velocities, from very slow horizon tracking to very fast near-overhead passes. A 15-km satellite pass distance from overhead is used as a control-system design criterion. For the narrow-beamwidth X-band track this requires an acceleration error of less than 0.100 deg and an acceleration error constant of at least 90/sec².
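
    The 15-km criterion can be sanity-checked with a straight-line pass approximation: a satellite crossing at ground speed v with closest horizontal offset d from the overhead point has azimuth angle atan(vt/d), whose peak rate is v/d and whose peak acceleration is 9(v/d)²/(8√3). The ground-track speed below is an assumed typical LEO value, not a figure from the paper.

```python
import math

v = 7.5   # km/s, assumed LEO ground-track speed (not from the paper)
d = 15.0  # km, closest-approach offset from overhead (the stated criterion)

peak_rate = v / d                                      # rad/s, at closest approach
peak_accel = (9 / (8 * math.sqrt(3))) * (v / d) ** 2   # rad/s², at vt/d = 1/sqrt(3)

print(f"peak azimuth rate  ~ {math.degrees(peak_rate):.1f} deg/s")
print(f"peak azimuth accel ~ {math.degrees(peak_accel):.1f} deg/s^2")
```

    The result, roughly 29 deg/s and 9.3 deg/s², is consistent with the pedestal's 10 deg/sec² acceleration capability quoted above.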

  16. RAPID COMMUNICATION: Novel high performance small-scale thermoelectric power generation employing regenerative combustion systems

    NASA Astrophysics Data System (ADS)

    Weinberg, F. J.; Rowe, D. M.; Min, G.

    2002-07-01

    Hydrocarbon fuels have specific energy contents some two orders of magnitude greater than any electrical storage device. They therefore offer an ideal energy source in the universal quest for compact, lightweight, long-lasting alternatives to batteries for powering ever-proliferating electronic devices. The motivation lies in the need to power, for example, equipment for infantry troops, or weather stations and buoys in polar regions that must signal their readings intermittently to passing satellites, unattended over long periods. Fuel cells, converters based on miniaturized gas turbines, and other systems under intensive study give rise to diverse practical difficulties. Thermoelectric devices are robust, durable and have no moving parts, but tend to be exceedingly inefficient. We propose regenerative combustion systems which mitigate this impediment and are likely to make high performance small-scale thermoelectric power generation applicable in practice. The efficiency of a thermoelectric generating system using preheat when operated between ambient and 1200 K is calculated to exceed the efficiency of the best present-day thermoelectric conversion system by more than 20%.
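
    For scale, the standard single-stage thermoelectric generator efficiency formula (a textbook expression, not one from this paper) gives the ideal efficiency over the quoted temperature span; the figure of merit ZT = 1 below is an assumed, typical value for illustration.

```python
import math

def te_efficiency(t_cold, t_hot, zt):
    # eta = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th),
    # with ZT evaluated at the mean temperature.
    m = math.sqrt(1 + zt)
    return (1 - t_cold / t_hot) * (m - 1) / (m + t_cold / t_hot)

# Operating between ambient (~300 K) and 1200 K, as in the abstract.
eta = te_efficiency(300.0, 1200.0, 1.0)
print(f"ideal conversion efficiency ~ {eta:.1%}")
```

    The large Carnot factor across this span is why recovering heat via regenerative preheat, rather than improving the thermoelectric material itself, can raise system efficiency appreciably.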

  17. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  19. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data is becoming increasingly important in many application domains, including geospatial problems in numerous fields, location-based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling of boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
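
    The combination of spatial partitioning and boundary-object handling can be sketched as follows. Plain Python stands in for MapReduce, and a uniform grid stands in for the partitioner; RESQUE and the Hive integration are not modeled, and all names are illustrative.

```python
# Partitioned spatial window query in the spirit of Hadoop-GIS: tile the
# space, process tiles independently (as map tasks would), and deduplicate
# objects that straddle tile boundaries.
def tiles_for(rect, tile_size):
    x1, y1, x2, y2 = rect
    return {(tx, ty)
            for tx in range(int(x1 // tile_size), int(x2 // tile_size) + 1)
            for ty in range(int(y1 // tile_size), int(y2 // tile_size) + 1)}

def query(objects, window, tile_size=10):
    # "Map": assign each object (id, rect) to every tile it overlaps.
    partitions = {}
    for oid, rect in objects:
        for t in tiles_for(rect, tile_size):
            partitions.setdefault(t, []).append((oid, rect))
    # Process only the tiles the query window touches.
    hits = set()
    for t in tiles_for(window, tile_size):
        for oid, (x1, y1, x2, y2) in partitions.get(t, []):
            wx1, wy1, wx2, wy2 = window
            if x1 <= wx2 and wx1 <= x2 and y1 <= wy2 and wy1 <= y2:
                hits.add(oid)  # set-dedup amends duplicated boundary objects
    return hits

objs = [("a", (1, 1, 3, 3)), ("b", (8, 8, 12, 12)), ("c", (25, 25, 27, 27))]
print(sorted(query(objs, (0, 0, 11, 11))))  # → ['a', 'b']
```

    Object "b" is replicated into four tiles because it crosses tile boundaries; the final deduplication step is the single-machine analogue of the result-amending methods the abstract describes.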

  20. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
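
    The partition-merge pattern for a nearest-neighbor query can be sketched in miniature: each partition computes its local best candidate (the "map" side), and a merge step picks the global winner. Index building and the cost-based skew handling are omitted; the hash partitioning below is an illustrative stand-in for the paper's spatial partitioning.

```python
import math

def nearest(points, q, n_parts=4):
    # Partition the points (standing in for MapReduce spatial partitioning).
    parts = [[] for _ in range(n_parts)]
    for i, p in enumerate(points):
        parts[i % n_parts].append(p)
    # Local phase: per-partition nearest candidate.
    local_best = [min(part, key=lambda p: math.dist(p, q))
                  for part in parts if part]
    # Merge phase: global nearest among the per-partition candidates.
    return min(local_best, key=lambda p: math.dist(p, q))

pts = [(0, 0), (5, 5), (2, 1), (9, 9), (2, 2)]
print(nearest(pts, (2.1, 1.9)))  # → (2, 2)
```

    The merge step only ever sees one candidate per partition, which is what keeps the reduce side cheap regardless of how many micro-anatomic objects each partition holds.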

  1. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data is becoming increasingly important in many application domains, including geospatial problems in numerous fields, location-based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from a large number of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling of boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  2. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the “big data” challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  3. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open-source software at http://mutil.sourceforge.net.
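
    The hash-tree idea, which is what makes an inherently serial checksum parallelizable, can be sketched as follows. This is an illustration of the technique, not msum's actual chunking or digest format: chunks are hashed independently (so different threads, or nodes, can each take a chunk), then the per-chunk digests are combined into a single root digest.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data, chunk_size=1 << 20, workers=4):
    # Split-file processing: each chunk is hashed independently, in parallel.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaves = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    # Combine pairwise until one root digest remains; a plain md5 over the
    # whole file would force strictly serial processing instead.
    while len(leaves) > 1:
        leaves = [hashlib.md5(b"".join(leaves[i:i + 2])).digest()
                  for i in range(0, len(leaves), 2)]
    return leaves[0].hex()

data = bytes(range(256)) * 20000  # ~5 MB of sample data
assert tree_hash(data) == tree_hash(data)       # deterministic
assert tree_hash(data) != tree_hash(data[:-1])  # sensitive to any change
```

    Because `pool.map` preserves input order, the root digest is independent of thread scheduling, which is what makes the parallel result reproducible.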

  4. Guidelines for application of fluorescent lamps in high-performance avionic backlight systems

    NASA Astrophysics Data System (ADS)

    Syroid, Daniel D.

    1997-07-01

    Fluorescent lamps have proven to be well suited for use in high performance avionic backlight systems, as demonstrated by numerous production applications for both commercial and military cockpit displays, including the Boeing 777 and new 737s, F-15, F-16, F-18, F-22, C-130, Navy P-3 and NASA Space Shuttle. Fluorescent-lamp-based backlights provide high luminance, high lumen efficiency, precise chromaticity and long life for avionic active matrix liquid crystal display applications. Lamps have been produced in many sizes and shapes: diameters range from 2.6 mm to over 20 mm, and lengths for the larger-diameter lamps extend to over one meter. Highly convoluted serpentine lamp configurations are common, as are both hot- and cold-cathode electrode designs. This paper reviews fluorescent lamp operating principles, discusses typical requirements for avionic-grade lamps, compares avionic and laptop backlight designs, and provides guidelines for the proper application of lamps and the performance trade-offs that must be made to attain optimum system performance in terms of luminance output, system efficiency, dimming range and cost.

  5. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    NASA Astrophysics Data System (ADS)

    Deatrich, D. C.; Liu, S. X.; Tafirout, R.

    2010-04-01

    We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold, or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
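
    The read-back reordering idea can be sketched simply: group requests by tape so each tape is mounted once, then service each tape's requests in a single forward sweep of positions. This is the elevator concept only; the field names are illustrative, not Tapeguy's schema, and priorities and drive load balancing are not modeled.

```python
# Elevator-style reordering of tape read requests.
def reorder(requests):
    by_tape = {}
    for tape, pos, name in requests:
        by_tape.setdefault(tape, []).append((pos, name))
    order = []
    for tape in sorted(by_tape):                  # one mount per tape
        for pos, name in sorted(by_tape[tape]):   # one forward sweep
            order.append(name)
    return order

reqs = [("T2", 30, "f3"), ("T1", 90, "f2"), ("T2", 5, "f4"), ("T1", 10, "f1")]
print(reorder(reqs))  # → ['f1', 'f2', 'f4', 'f3']
```

    Serviced in arrival order, these four requests would need four mounts and a backward seek; reordered, they need two mounts and only forward motion, which is exactly the loading/unloading the abstract says the queuing mechanism avoids.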

  6. A high performance inverter-fed drive system of an interior permanent magnet synchronous machine

    NASA Astrophysics Data System (ADS)

    Bose, B. K.

    A high performance, fully operational four-quadrant control scheme for an interior permanent magnet synchronous machine is described. The machine operates smoothly with full performance in the constant-torque region, as well as in the flux-weakening constant-power region, in both directions of motion. The transition between the constant-torque and constant-power regions is smooth under all operating conditions. Control in the constant-torque region is based on the vector (field-oriented) technique with the direct axis aligned to the total stator flux, whereas constant-power region control is implemented by orientation of the torque angle of the impressed square-wave voltage through a feedforward vector rotator. The control system is implemented digitally using a distributed microcomputer system, and all the essential feedback signals, such as torque and flux, are estimated with precision. The control is described with an outer torque control loop primarily for traction-type applications, but speed and position control loops can easily be added to extend its application to other industrial drives. A 70 hp drive system using a Neodymium-Iron-Boron PM machine and a transistor PWM inverter has been designed and extensively tested in the laboratory on a dynamometer, and its performance is found to be excellent.
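
    Vector control of this kind works in a rotating d-q reference frame. As a small illustration of the coordinate transform underlying any such scheme (the standard Clarke and Park transforms for a balanced three-phase machine, not this paper's complete controller, whose flux estimators and vector rotator are not modeled):

```python
import math

def abc_to_dq(ia, ib, ic, theta):
    # Clarke: three-phase to stationary alpha-beta frame
    # (amplitude-invariant form, assuming ia + ib + ic = 0).
    i_alpha = ia
    i_beta = (ia + 2 * ib) / math.sqrt(3)
    # Park: rotate into the frame at electrical angle theta.
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced currents aligned with theta = 0 land entirely on the d axis.
i_d, i_q = abc_to_dq(1.0, -0.5, -0.5, 0.0)
print(round(i_d, 6), round(i_q, 6))  # → 1.0 0.0
```

    In the d-q frame the three sinusoidal phase currents become two slowly varying quantities, which is what lets a digital controller regulate torque and flux independently.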

  7. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting or animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, and even dynamically as a function of object properties, with instant visual feedback to the user.

  8. Detection of HEMA in self-etching adhesive systems with high performance liquid chromatography

    NASA Astrophysics Data System (ADS)

    Panduric, V.; Tarle, Z.; Hameršak, Z.; Stipetić, I.; Matosevic, D.; Negovetić-Mandić, V.; Prskalo, K.

    2009-04-01

    One of the factors that can decrease the hydrolytic stability of self-etching adhesive systems (SEAS) is 2-hydroxyethyl methacrylate (HEMA). Due to the hydrolytic instability of acidic methacrylate monomers in SEAS, HEMA can be present even if the manufacturer did not include it in the original composition. The aim of the study was to determine the presence of HEMA arising from hydrolytic decomposition of methacrylates during storage, which results in loss of adhesion strength to the hard dental tissues of the tooth crown. The three most commonly used SEAS were tested under different storage conditions: AdheSE ONE, G-Bond and iBond. High performance liquid chromatography analysis was performed on a Nucleosil C 18-100 5 μm (250 × 4.6 mm) column, Knauer K-501 pumps and a Wellchrom DAD K-2700 detector at 215 nm. Data were collected and processed by EuroCrom 2000 HPLC software. Calibration curves were made relating eluted peak area to known concentrations of HEMA (purchased from Fluka). The elution time for HEMA is 12.25 min at a flow rate of 1.0 ml/min. The results indicate that no HEMA was present in AdheSE ONE, because its methacrylates are substituted with methacrylamides, which seem to be more stable under acidic aqueous conditions. In all other adhesive systems HEMA was detected.
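
    The calibration-curve step works like any linear HPLC quantitation: fit peak area against known standard concentrations, then invert the fit for an unknown. The sketch below illustrates that workflow with made-up numbers; none of the concentrations or areas are data from the study.

```python
def linfit(xs, ys):
    # Ordinary least-squares line: area = slope * conc + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [0.5, 1.0, 2.0, 4.0]      # HEMA standards, mmol/L (hypothetical)
area = [12.0, 24.5, 48.0, 97.0]  # detector peak areas (hypothetical)
slope, intercept = linfit(conc, area)

unknown_area = 60.0
unknown_conc = (unknown_area - intercept) / slope
print(f"~{unknown_conc:.2f} mmol/L")
```

    The detector response at 215 nm is effectively linear over the working range, which is why a single slope/intercept pair suffices to convert an eluted peak area into a HEMA concentration.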

  9. Systems design of high performance stainless steels I. Conceptual and computational design

    NASA Astrophysics Data System (ADS)

    Campbell, C. E.; Olson, G. B.

    2000-10-01

    Application of a systems approach to computational materials design led to the development of a high performance stainless steel. The systems approach highlighted the integration of processing/structure/property/performance relations with mechanistic models to achieve desired quantitative property objectives. The mechanistic models applied to the martensitic transformation behavior included the Olson-Cohen model for heterogeneous nucleation and the Ghosh-Olson solid-solution strengthening model for interfacial mobility. Strengthening theory employed modeling of coherent M2C precipitation in a BCC matrix, which is initially in paraequilibrium with cementite. The calibration of the M2C coherency used available small-angle neutron scattering (SANS) data to determine a composition-dependent strain energy and a composition-independent interfacial energy. Multicomponent pH-potential diagrams provided an effective tool for evaluating oxide stability. Constrained equilibrium calculations correlated oxide stability to Cr enrichment in the metastable spinel film, allowing more efficient use of alloy Cr content. Composition constraints derived from multicomponent solidification simulations improved castability. Integration of these models, using multicomponent thermodynamic and diffusion software, then enabled the design of a carburizable, secondary-hardening martensitic stainless steel for advanced bearing applications.

  10. Engineering Development of Coal-Fired High-Performance Power Systems

    SciTech Connect

    York Tsuo

    2000-12-31

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, the University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF), a pulverized fuel-fired boiler/air heater in which steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS subsystems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately. This report addresses the areas of technical progress for this quarter. Details of the syngas cooler design are given in this report. Final construction work on the CFB pyrolyzer pilot plant started during this quarter. No experimental testing was performed during this quarter. The proposed test matrix for future CFB pyrolyzer tests is given in this report. In addition to testing various fuels, bed temperature will be the primary test parameter.

  11. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  12. High-performance CMOS image sensors at BAE SYSTEMS Imaging Solutions

    NASA Astrophysics Data System (ADS)

    Vu, Paul; Fowler, Boyd; Liu, Chiao; Mims, Steve; Balicki, Janusz; Bartkovjak, Peter; Do, Hung; Li, Wang

    2012-07-01

    In this paper, we present an overview of high-performance CMOS image sensor products developed at BAE SYSTEMS Imaging Solutions designed to satisfy the increasingly challenging technical requirements for image sensors used in advanced scientific, industrial, and low light imaging applications. We discuss the design and present the test results of a family of image sensors tailored for high imaging performance and capable of delivering sub-electron readout noise, high dynamic range, low power, high frame rates, and high sensitivity. We briefly review the performance of the CIS2051, a 5.5-Mpixel image sensor, which represents our first commercial CMOS image sensor product that demonstrates the potential of our technology, then we present the performance characteristics of the CIS1021, a full HD format CMOS image sensor capable of delivering sub-electron read noise performance at 50 fps frame rate at full HD resolution. We also review the performance of the CIS1042, a 4-Mpixel image sensor which offers better than 70% QE @ 600nm combined with better than 91dB intra scene dynamic range and about 1 e- read noise at 100 fps frame rate at full resolution.

  13. High-performance immunoassays based on through-stencil patterned antibodies and capillary systems.

    PubMed

    Ziegler, Jörg; Zimmermann, Martin; Hunziker, Patrick; Delamarche, Emmanuel

    2008-03-01

    We present a simple method to pattern capture antibodies (cAbs) on poly(dimethylsiloxane) (PDMS), with high accuracy and in a manner compatible with mass fabrication for use with capillary systems (CSs), using stencils microfabricated in Si. Capture antibodies are patterned as 60-270 microm wide and 2 mm long lines on PDMS and used with CSs that have been optimized for convenient handling, pipetting of solutions, pumping of liquids, such as human blood serum, and visualization of signals for fluorescence immunoassays. With the use of this method, C-reactive protein (CRP) is detected with a sensitivity of 0.9 ng mL(-1) (7.8 pM) in 1 microL of CRP-spiked human serum, within 11 min and using only four pipetting steps and a total volume of sample and reagents of 1.35 microL. This exemplifies the high performance that can be achieved using this approach and an otherwise conventional surface sandwich fluorescence immunoassay. This method is simple and flexible and should therefore be applicable to a large number of demanding immunoassays.
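
    The quoted sensitivity can be checked by converting the mass concentration to molarity. The conversion below is a sketch; the ~115 kDa molar mass of pentameric CRP is an assumed value, not a figure from the abstract.

```python
# Convert the reported CRP sensitivity (0.9 ng/mL) to a molar concentration.
# Assumption: pentameric CRP molar mass ~115,000 g/mol (not from the abstract).
def mass_to_molar(conc_ng_per_ml, molar_mass_g_per_mol):
    """Convert a mass concentration in ng/mL to molarity in pM."""
    g_per_l = conc_ng_per_ml * 1e-9 * 1e3      # ng/mL -> g/L
    mol_per_l = g_per_l / molar_mass_g_per_mol  # g/L -> mol/L
    return mol_per_l * 1e12                     # mol/L -> pM

print(round(mass_to_molar(0.9, 115_000), 1))  # 7.8 (pM), matching the abstract
```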

  14. Analysis of starch in food systems by high-performance size exclusion chromatography.

    PubMed

    Ovando-Martínez, Maribel; Whitney, Kristin; Simsek, Senay

    2013-02-01

    Starch has unique physicochemical characteristics among food carbohydrates. Starch contributes to the physicochemical attributes of food products made from roots, legumes, cereals, and fruits. It occurs naturally as distinct particles, called granules. Most starch granules are a mixture of 2 sugar polymers: a highly branched polysaccharide named amylopectin and a basically linear polysaccharide named amylose. The starch contained in food products undergoes changes during processing, which causes changes in the starch molecular weight and the amylose to amylopectin ratio. The objective of this study was to develop a new, simple, 1-step, and accurate method for simultaneous determination of the amylose to amylopectin ratio as well as the weight-averaged molecular weight of starch in food products. Starch from bread flour, canned peas, corn flake cereal, snack crackers, canned kidney beans, pasta, potato chips, and white bread was extracted by dissolving in KOH and urea followed by precipitation with ethanol. Starch samples were solubilized and analyzed on a high-performance size exclusion chromatography (HPSEC) system. To verify the identity of the peaks, fractions were collected and soluble starch and beta-glucan assays were performed in addition to gas chromatography analysis. We found that all the fractions contained only glucose and that the soluble starch assay correlated with the HPSEC fractionation. This new method can be used to determine the amylose to amylopectin ratio and weight-averaged molecular weight of starch from various food products using as little as 25 mg of dry sample.
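
    The weight-averaged molecular weight that HPSEC yields is the mass-weighted mean over the eluted fractions, Mw = Σ(w_i·M_i)/Σw_i. The sketch below illustrates the calculation with hypothetical fraction data, not values from the paper.

```python
# Weight-averaged molecular weight from SEC fractions: each fraction i has a
# detector response w_i (proportional to mass) and a molecular weight M_i from
# column calibration. All numbers below are illustrative assumptions.
def weight_average_mw(molecular_weights, mass_fractions):
    """Mw = sum(w_i * M_i) / sum(w_i)."""
    assert len(molecular_weights) == len(mass_fractions)
    total = sum(mass_fractions)
    return sum(w * m for w, m in zip(mass_fractions, molecular_weights)) / total

# hypothetical amylopectin-rich (high-M) and amylose (low-M) fractions
M = [5e7, 1e7, 5e5, 1e5]   # g/mol per fraction
w = [0.5, 0.2, 0.2, 0.1]   # relative mass eluting in each fraction
print(f"Mw = {weight_average_mw(M, w):.3e} g/mol")
```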

  15. Pyrolytic carbon-coated stainless steel felt as a high-performance anode for bioelectrochemical systems.

    PubMed

    Guo, Kun; Hidalgo, Diana; Tommasi, Tonia; Rabaey, Korneel

    2016-07-01

    Scale up of bioelectrochemical systems (BESs) requires highly conductive, biocompatible and stable electrodes. Here we present pyrolytic carbon-coated stainless steel felt (C-SS felt) as a high-performance and scalable anode. The electrode is created by generating a carbon layer on stainless steel felt (SS felt) via a multi-step deposition process involving α-d-glucose impregnation, caramelization, and pyrolysis. Physicochemical characterization of the surface shows that a thin (20±5μm) and homogeneous layer of polycrystalline graphitic carbon was obtained on the SS felt surface after modification. The carbon coating significantly increases the biocompatibility, enabling robust electroactive biofilm formation. The C-SS felt electrodes reach current densities (jmax) of 3.65±0.14mA/cm(2) within 7 days of operation, which is 11 times higher than plain SS felt electrodes (0.30±0.04mA/cm(2)). The excellent biocompatibility, high specific surface area, high conductivity, good mechanical strength, and low cost make C-SS felt a promising electrode for BESs.

  16. Engineering development of coal-fired high performance power systems phase 2 and 3

    SciTech Connect

    Unknown

    1999-08-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le}10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; and cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.2 HITAF Air Heaters; and Task 2.4 Duct Heater and Gas Turbine Integration.

  17. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1998-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustor; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  18. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-01-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  19. A high performance frequency standard and distribution system for Cassini Ka-band experiment

    NASA Technical Reports Server (NTRS)

    Wang, Rabi T.; Calhoun, M. D.; Kirk, A.; Diener, W. A.; Dick, G. J.; Tjoelker, R. L.

    2005-01-01

    This paper provides an overview and update of a specialized frequency reference system for the NASA Deep Space Network (DSN) to support Ka-band radio science experiments with the Cassini spacecraft, currently orbiting Saturn. Three major components, a Hydrogen Maser, Stabilized Fiber-optic Distribution Assembly (SFODA), and 10 Kelvin Cryocooled Sapphire Oscillator (10K CSO) and frequency-lock-loop, are integrated to achieve the very high performance, ground-based frequency reference at a remote antenna site located 16 km from the hydrogen maser. Typical measured Allan deviation is 1.6 x 10(-14) at 1 second and 1.7 x 10(-15) at 1000 seconds averaging intervals. Recently two 10K CSOs have been compared in situ while operating at the remote DSN site DSS-25. The CSO references were used operationally to downconvert the Ka-band downlink received from the Cassini spacecraft in a series of occultation measurements performed over a 78 day period from March to June 2005.
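
    The Allan deviation quoted above is the standard stability measure for such frequency references. The sketch below implements the usual overlapping estimator on synthetic white-frequency-noise data; the noise level and sample count are illustrative, not measurements from the paper.

```python
import numpy as np

# Overlapping Allan deviation of fractional-frequency samples y taken at a
# fixed interval tau0; a textbook estimator, shown to illustrate the kind of
# figure quoted for the maser/CSO reference (data below are synthetic).
def allan_deviation(y, m):
    """Overlapping Allan deviation at averaging factor m (tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    avg = np.convolve(y, np.ones(m) / m, mode="valid")  # overlapping m-averages
    d = avg[m:] - avg[:-m]                              # adjacent-average differences
    return np.sqrt(0.5 * np.mean(d ** 2))

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-13, 100_000)  # white frequency noise, sigma_y(tau0) ~ 1e-13
# For white FM, sigma_y(tau) falls as 1/sqrt(m): expect ~1e-13 and ~1e-14 here.
print(allan_deviation(y, 1), allan_deviation(y, 100))
```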

  20. Advanced Insulation for High Performance Cost-Effective Wall, Roof, and Foundation Systems Final Report

    SciTech Connect

    Costeux, Stephane; Bunker, Shanon

    2013-12-20

    The objective of this project was to explore and potentially develop high-performing insulation with increased R/inch and low impact on climate change that would help design highly insulating building envelope systems with more durable performance and lower overall system cost than envelopes with equivalent performance made with materials available today. The proposed technical approach relied on insulation foams with nanoscale pores (about 100 nm in size) in which heat transfer is decreased. Through the development of new foaming methods, new polymer formulations and new analytical techniques, and by advancing the understanding of how cells nucleate, expand and stabilize at the nanoscale, Dow successfully invented and developed methods to produce foams with 100 nm cells and 80% porosity by batch foaming at the laboratory scale. Measurements of the gas conductivity on small nanofoam specimens confirmed quantitatively the benefit of nanoscale cells (Knudsen effect) for increasing insulation value, which was the key technical hypothesis of the program. In order to bring this technology closer to a viable semi-continuous/continuous process, the project team modified an existing continuous extrusion foaming process as well as designed and built a custom system to produce 6" x 6" foam panels. Dow demonstrated for the first time that nanofoams can be produced in both processes. However, due to technical delays, foam characteristics achieved so far fall short of the 100 nm target set for optimal insulation foams. In parallel with the technology development, effort was directed to the determination of the most promising applications for nanocellular insulation foam. A Voice of Customer (VOC) exercise confirmed that demand for high-R-value products will rise due to increased building code requirements in the near future, but that acceptance of novel products by the building industry may be slow. Partnerships with green builders, initial launches in smaller markets (e.g. EIFS

  1. HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    SciTech Connect

    Kim, Youngjae; Gupta, Aayush; Urgaonkar, Bhuvan; Piotr, Berman; Sivasubramaniam, Anand

    2011-01-01

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based solid-state drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than the magnetic hard disk drives (HDDs), and can sometimes be as slow as the HDD (due to excessive garbage collection (GC) induced by high intensity of random writes). Given these trade-offs between HDDs and SSDs in terms of cost, performance, and lifetime, the current consensus among several storage experts is to view SSDs not as a replacement for HDD but rather as a complementary device within the high-performance storage hierarchy. We design and evaluate such a hybrid system called HybridStore to provide: (a) HybridPlan: improved capacity planning technique to administrators with the overall goal of operating within cost-budgets and (b) HybridDyn: improved performance/lifetime guarantees during episodes of deviations from expected workloads through two novel mechanisms: write-regulation and fragmentation busting. As an illustrative example of HybridStore's efficacy, HybridPlan is able to find the most cost-effective storage configuration for a large scale workload of Microsoft Research and suggest one MLC SSD with ten 7.2K RPM HDDs instead of fourteen 7.2K RPM HDDs only. HybridDyn is able to reduce the average response time for an enterprise scale random-write dominant workload by about 71% as compared to a HDD-based system.
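
    The capacity-planning idea behind HybridPlan can be sketched as a small cost optimization: choose the cheapest SSD/HDD mix that meets throughput and capacity targets. The device prices, IOPS, and capacities below are illustrative assumptions, not figures from the paper.

```python
from itertools import product

# A toy version of hybrid capacity planning: exhaustively search SSD/HDD
# counts for the cheapest mix meeting IOPS and capacity targets.
# All device numbers are hypothetical, not taken from HybridStore.
DEVICES = {"ssd": {"cost": 400, "iops": 5000, "gb": 256},
           "hdd": {"cost": 100, "iops": 150,  "gb": 2000}}

def cheapest_mix(need_iops, need_gb, max_units=20):
    """Return (cost, n_ssd, n_hdd) for the cheapest feasible configuration."""
    best = None
    for n_ssd, n_hdd in product(range(max_units), repeat=2):
        iops = n_ssd * DEVICES["ssd"]["iops"] + n_hdd * DEVICES["hdd"]["iops"]
        gb = n_ssd * DEVICES["ssd"]["gb"] + n_hdd * DEVICES["hdd"]["gb"]
        if iops >= need_iops and gb >= need_gb:
            cost = n_ssd * DEVICES["ssd"]["cost"] + n_hdd * DEVICES["hdd"]["cost"]
            if best is None or cost < best[0]:
                best = (cost, n_ssd, n_hdd)
    return best

# A random-write-heavy workload pushes the optimum toward one SSD plus HDDs
# for capacity, echoing the paper's "one SSD with ten HDDs" flavor.
print(cheapest_mix(need_iops=6000, need_gb=20000))  # (1400, 1, 10)
```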

  2. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    Unknown

    1999-02-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. A general arrangement drawing of the char transfer system was forwarded to SCS for their review. Structural steel drawings were used to generate a three-dimensional model of the char

  3. Coal-fired high performance power generating system. Quarterly progress report

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: > 47% thermal efficiency; NO{sub x}, SO{sub x} and particulates < 25% NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MW{sub e} combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NO{sub x} production, minimum burnout lengths, combustion temperatures and even particulate impact on the combustor walls. When our model is applied to the long flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high nitrogen coals a rapid mixing, rich-lean, deep staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  4. Determination of the kinetic rate constant of cyclodextrin supramolecular systems by high performance affinity chromatography.

    PubMed

    Li, Haiyan; Ge, Jingwen; Guo, Tao; Yang, Shuo; He, Zhonggui; York, Peter; Sun, Lixin; Xu, Xu; Zhang, Jiwen

    2013-08-30

    It is extremely challenging to measure the kinetics of supramolecular systems with extensive, weak binding (Ka<10(5)M(-1)) and fast dissociation, such as those composed of cyclodextrins and drugs. In this study, a modified peak profiling method based on high performance affinity chromatography (HPAC) was established to determine the dissociation rate constants of cyclodextrin supramolecular systems. The interactions of β-cyclodextrin with acetaminophen and sertraline were used to exemplify the method. The retention times, variances and plate heights of the peaks for acetaminophen or sertraline and a conventional non-retained substance (H2O) on the β-cyclodextrin bonded column and a control column were determined at four flow rates under linear elution conditions. Then, plate heights for the theoretical non-retained substance were estimated by the modified HPAC method, taking into consideration diffusion and stagnant mobile phase mass transfer. As a result, apparent dissociation rate constants of 1.82 (±0.01)s(-1) and 3.55 (±0.37)s(-1) were estimated for acetaminophen and sertraline, respectively, at pH 6.8 and 25°C with multiple flow rates. Following subtraction of the non-specific binding with the support, dissociation rate constants were estimated as 1.78 (±0.00) and 1.91 (±0.02)s(-1) for acetaminophen and sertraline, respectively. These results for acetaminophen and sertraline were in good agreement with the magnitude of the rate constants for other drugs determined by capillary electrophoresis reported in the literature and with the peak fitting method we performed. The method described in this work is thought to be suitable for other supramolecules with relatively weak, fast and extensive interactions.
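
    The peak profiling idea can be sketched with the commonly used single-analyte relation k_d = 2·u·k'/(H_k·(1+k')²), where H_k is the excess plate height of the retained peak over the non-retained one, u is the linear velocity, and k' is the retention factor. This is a generic textbook form of peak profiling, not the modified multi-flow-rate treatment of this paper, and the input numbers are illustrative.

```python
# Generic peak-profiling estimate of a dissociation rate constant.
# Assumption: standard relation k_d = 2 u k' / (H_k (1 + k')^2) with
# H_k = H_retained - H_nonretained; all inputs below are hypothetical.
def dissociation_rate(u, k_prime, H_retained, H_nonretained):
    """u: linear velocity (cm/s); plate heights in cm; returns k_d in 1/s."""
    H_k = H_retained - H_nonretained
    return 2.0 * u * k_prime / (H_k * (1.0 + k_prime) ** 2)

kd = dissociation_rate(u=0.1, k_prime=2.0, H_retained=0.035, H_nonretained=0.010)
print(f"apparent k_d = {kd:.2f} 1/s")  # ~1.78 1/s, the order seen for acetaminophen
```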

  5. WDM package enabling high-bandwidth optical intrasystem interconnects for high-performance computer systems

    NASA Astrophysics Data System (ADS)

    Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.

    2006-02-01

    From long-haul, metro-access and intersystem links, the trend is toward applying optical interconnection technology at increasingly shorter distances. Intrasystem interconnects such as data busses between microprocessors and memory blocks are still based on copper interconnects today. This causes a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by the underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier. The principle feasibility of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For cost reasons, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted due to mode dispersion. The advance of WDM technology towards intrasystem applications will provide the sufficiently high bandwidth required for future high-performance computer systems: if, for example, 8 wavelength channels with 12 Gbps (SDR1) each are given, then optical on-board interconnects can be realized with data rates a magnitude higher than the data rates of electrical interconnects for distances typically found on today's computer boards and backplanes. The data rate will be twice as high if DDR2 technology is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which might enable the application of WDM technology to EOCB.
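
    The bandwidth arithmetic in the abstract is straightforward to make explicit: aggregate link rate is channels × per-channel rate, doubled again under DDR signalling.

```python
# Aggregate bandwidth of the WDM on-board link described in the abstract:
# 8 wavelength channels at 12 Gbps each (SDR), doubled under DDR signalling.
def aggregate_gbps(channels, gbps_per_channel, ddr=False):
    return channels * gbps_per_channel * (2 if ddr else 1)

print(aggregate_gbps(8, 12))            # 96 Gbps with SDR
print(aggregate_gbps(8, 12, ddr=True))  # 192 Gbps with DDR
```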

  6. High performance dash on warning air mobile, missile system. [intercontinental ballistic missiles - systems analysis

    NASA Technical Reports Server (NTRS)

    Levin, A. D.; Castellano, C. R.; Hague, D. S.

    1975-01-01

    An aircraft-missile system which performs a high acceleration takeoff followed by a supersonic dash to a 'safe' distance from the launch site is presented. Topics considered are: (1) technological feasibility of the dash on warning concept; (2) aircraft and boost trajectory requirements; and (3) partial cost estimates for a fleet of aircraft which provide 200 missiles on airborne alert. Various aircraft boost propulsion systems were studied, such as an unstaged cryogenic rocket, an unstaged storable liquid, and a staged solid rocket system. Various wing planforms were also studied. Vehicle gross weights are given. The results indicate that the dash on warning concept will meet expected performance criteria and can be implemented using existing technology, such as all-aluminum aircraft and existing high-bypass-ratio turbofan engines.

  7. System and method for on demand, vanishing, high performance electronic systems

    SciTech Connect

    Shah, Kedar G.; Pannu, Satinderpall S.

    2016-03-22

    An integrated circuit system having an integrated circuit (IC) component which is able to have its functionality destroyed upon receiving a command signal. The system may involve a substrate with the IC component being supported on the substrate. A module may be disposed in proximity to the IC component. The module may have a cavity and a dissolving compound in a solid form disposed in the cavity. A heater component may be configured to heat the dissolving compound to a point of sublimation where the dissolving compound changes from a solid to a gaseous dissolving compound. A triggering mechanism may be used for initiating a dissolution process whereby the gaseous dissolving compound is allowed to attack the IC component and destroy a functionality of the IC component.

  8. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    1998-11-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolyzation process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, Al. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. The design of the char burner was completed during this quarter. The burner is designed for arch-firing and has a maximum capacity of 30 MMBtu/hr. This size represents a half scale version of a typical commercial burner. The burner is outfitted with

  9. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures but also aerodynamic failures such as wing-body damage often result in substantially deteriorated performance because of low available redundancy. As a result, the remaining control actuators may yield substantially lower maneuvering capabilities that do not permit accomplishment of the aircraft's originally specified mission. The problem is to achieve control reconfiguration over the available control redundancy when mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In the case of nominal operation with possible minor failure(s), a standard adaptive controller achieves the control allocation. This is referred to as the first layer, the controller layer. Performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. Mission adjustment occurs in the third layer, the mission layer, when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through the optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission decision property based on the currently available resources. The contribution of the research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in reality. Moreover, the emphasis is on the mission design capabilities which may guarantee the stability of the aircraft with restricted post
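
    The three-layer escalation (controller → reference → mission) can be sketched as a simple dispatch on remaining control capability. The layer names follow the abstract; the numeric thresholds are illustrative assumptions, not values from the dissertation.

```python
# Minimal sketch of the multi-layer reconfiguration idea: each layer handles
# progressively more severe failures. Thresholds below are hypothetical.
def reconfigure(control_capability):
    """control_capability: fraction (0..1) of nominal actuation remaining."""
    if control_capability > 0.8:
        return "controller layer: adaptive control allocation, nominal mission"
    if control_capability > 0.4:
        return "reference layer: degrade reference model, relax transients"
    return "mission layer: re-optimize command signal, modify mission"

for cap in (0.95, 0.6, 0.2):
    print(cap, "->", reconfigure(cap))
```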

  10. Towards a System for High-Performance, Multi-Language, Component-Based Modeling

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2008-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is a recently NSF-funded project that represents an effort to bring together a diverse community of surface dynamics modelers and model users. Key goals of the CSDMS project are to (1) promote open-source code sharing and re-use, (2) develop a review process for code contributions, (3) promote recognition of contributors, (4) develop a "library" of low-level software tools and higher-level models that can be linked as easily as possible into new applications and (5) provide resources to simplify the efforts of surface dynamics modelers. The architectural framework of CSDMS is being designed to allow code contributions to be in any of several different programming languages (language independence), to support a migration towards parallel computation and to support multiple operating systems (platform independence). After evaluating a number of different "coupling frameworks," the CSDMS project has decided to use a DOE-funded set of tools and standards called the Common Component Architecture (CCA) as the foundation for our model-linking efforts. CCA was specifically designed to meet the needs of high-performance, scientific computing. It also includes a powerful, language-interoperability tool called Babel that permits communication between components written in any of several major programming languages, including C, C++, Java, Fortran (all years) and Python. The CSDMS project has been collecting open-source components from our modeling community in all of these languages, including a variety of terrestrial, marine, coastal and hydrological models. CSDMS is now focused on the problem of how best to wrap these components with interfaces that allow them to be linked together with maximum ease and flexibility. To this end, we are adapting a Java version of the OpenMI (Open Modeling Interface) standard and an associated software development kit for use within a CCA framework. Our goal is to combine the best
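
    The component-wrapping approach described above typically exposes each model through a small lifecycle interface (initialize/update/finalize plus getters and setters for exchange items). The sketch below loosely follows the OpenMI flavor in Python; the method names and the toy reservoir model are illustrative, not the actual CSDMS or OpenMI API.

```python
# A loose, hypothetical sketch of an OpenMI-style component interface of the
# kind a CSDMS wrapper provides; not the exact OpenMI or CSDMS API.
class ModelComponent:
    def initialize(self, config): ...
    def update(self, dt): ...
    def finalize(self): ...
    def get_value(self, var_name): ...
    def set_value(self, var_name, value): ...

class ToyHydrologyModel(ModelComponent):
    """Hypothetical linear-reservoir model: outflow proportional to storage."""
    def initialize(self, config):
        self.storage = config.get("initial_storage", 0.0)
    def update(self, dt):
        self.storage -= 0.1 * self.storage * dt  # drain 10% per unit time
    def get_value(self, var_name):
        return {"storage": self.storage}[var_name]
    def set_value(self, var_name, value):
        if var_name == "storage":
            self.storage = value
    def finalize(self):
        pass

# A framework can now drive any component through the same calls:
m = ToyHydrologyModel()
m.initialize({"initial_storage": 100.0})
m.update(1.0)
print(m.get_value("storage"))  # ~90.0 after one 10% outflow step
m.finalize()
```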

  11. High performance MRI simulations of motion on multi-GPU systems

    PubMed Central

    2014-01-01

    Background MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Methods Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer
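
    The per-isochromat, per-timestep work that such simulators parallelize across GPU threads is essentially a Bloch precession/relaxation update. The sketch below is a textbook hard-pulse-free Bloch step in NumPy, not MRISIMUL code; all parameter values are illustrative.

```python
import numpy as np

# One Bloch timestep for a single isochromat: precess about z by omega*dt,
# then apply T1/T2 relaxation. A textbook sketch, not the MRISIMUL kernel.
def bloch_step(M, dt, omega, T1, T2, M0=1.0):
    """M = (Mx, My, Mz); omega in rad/s; T1, T2, dt in seconds."""
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    Mx = c * M[0] + s * M[1]
    My = -s * M[0] + c * M[1]
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    return np.array([Mx * E2, My * E2, M[2] * E1 + M0 * (1.0 - E1)])

# Evolve transverse magnetization for 1 s: transverse decays as exp(-t/T2),
# longitudinal recovers as M0 * (1 - exp(-t/T1)).
M = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    M = bloch_step(M, dt=1e-3, omega=2 * np.pi * 10, T1=1.0, T2=0.1)
print(np.linalg.norm(M[:2]), M[2])
```

A GPU implementation applies this same update to millions of isochromats in parallel, which is why the reported performance scales almost linearly with the number of cards.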

  12. Microdialysis based monitoring of subcutaneous interstitial and venous blood glucose in Type 1 diabetic subjects by mid-infrared spectrometry for intensive insulin therapy

    NASA Astrophysics Data System (ADS)

    Heise, H. Michael; Kondepati, Venkata Radhakrishna; Damm, Uwe; Licht, Michael; Feichtner, Franz; Mader, Julia Katharina; Ellmerer, Martin

    2008-02-01

    Implementing strict glycemic control can reduce the risk of serious complications in both diabetic and critically ill patients. For this purpose, many different blood glucose monitoring techniques and insulin infusion strategies have been tested towards the realization of an artificial pancreas under closed-loop control. In contrast to competing subcutaneously implanted electrochemical biosensors, microdialysis-based systems for sampling body fluids from either the interstitial adipose tissue compartment or from venous blood have been developed, which allow ex-vivo glucose monitoring by mid-infrared spectrometry. For the first option, a commercially available, subcutaneously inserted CMA 60 microdialysis catheter has been used routinely. The vascular body interface includes a double-lumen venous catheter in combination with whole-blood dilution using a heparin solution. The diluted whole blood is transported to a flow-through dialysis cell, where the harvesting of analytes across the microdialysis membrane takes place at high recovery rates. The dialysate is continuously transported to the IR-sensor. Ex-vivo measurements lasting up to 28 hours were conducted on type-1 diabetic subjects. Experiments have shown excellent agreement between the sensor readout and the reference blood glucose concentration values. The simultaneous assessment of dialysis recovery rates enables reliable quantification of whole-blood concentrations of glucose and metabolites (urea, lactate, etc.) after taking blood dilution into account. Our results from transmission spectrometry indicate that the developed bedside device enables reliable long-term glucose monitoring with reagent- and calibration-free operation.
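The back-calculation hinted at above — recovering the whole-blood concentration from the dialysate reading by dividing out the membrane recovery rate and the heparin dilution — can be sketched as simple arithmetic. The paper reports assessing recovery and dilution; the exact correction formula and the numbers below are an assumed reconstruction, not taken from the study:

```python
def whole_blood_concentration(c_dialysate, recovery, dilution_factor):
    """Back-calculate the whole-blood analyte concentration from the
    dialysate reading (hypothetical form of the correction).

    recovery        -- fraction of analyte crossing the membrane (0-1)
    dilution_factor -- blood volume / (blood + heparin solution) volume
    """
    return c_dialysate / (recovery * dilution_factor)

# e.g. 45 mg/dL in the dialysate, 90% recovery, 1:1 heparin dilution
glucose = whole_blood_concentration(45.0, 0.9, 0.5)  # -> 100.0 mg/dL
```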

  13. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    performance-computing-at-ford-motor-company/. 19 Tom Gielda, Whirlpool’s Home Appliance Rocket Science: Design to Delivery with High Performance Computing...funnel cooling air to the rear-mounted engines. They also design the propeller blades for optimum performance using HPC CFD. If the customer...Case Study for the Council on Competitiveness, Washington, D.C.; See http://www.compete.org/publication/detail/682/whirlpools-home-appliance-rocket

  14. Silicon photonics-based laser system for high performance fiber sensing

    NASA Astrophysics Data System (ADS)

    Ayotte, S.; Faucher, D.; Babin, A.; Costin, F.; Latrasse, C.; Poulin, M.; G.-Deschênes, É.; Pelletier, F.; Laliberté, M.

    2015-09-01

    We present a compact four-laser source based on the low-noise, high-bandwidth Pound-Drever-Hall method and optical phase-locked loops for sensing narrow spectral features. Four semiconductor external-cavity lasers in butterfly packages are mounted on a shared electronics control board, and all other optical functions are integrated on a single silicon photonics chip. This high-performance source is compact, automated, robust, operates over a wide temperature range and remains locked for days. A laser-to-resonance frequency noise of 0.25 Hz/√Hz is demonstrated.

  15. Relationships of Cognitive and Metacognitive Learning Strategies to Mathematics Achievement in Four High-Performing East Asian Education Systems

    ERIC Educational Resources Information Center

    Areepattamannil, Shaljan; Caleon, Imelda S.

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education…

  16. Constructing a LabVIEW-Controlled High-Performance Liquid Chromatography (HPLC) System: An Undergraduate Instrumental Methods Exercise

    ERIC Educational Resources Information Center

    Smith, Eugene T.; Hill, Marc

    2011-01-01

    In this laboratory exercise, students develop a LabVIEW-controlled high-performance liquid chromatography system utilizing a data acquisition device, two pumps, a detector, and fraction collector. The programming experience involves a variety of methods for interface communication, including serial control, analog-to-digital conversion, and…

  17. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  18. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    NASA Astrophysics Data System (ADS)

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-05-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is uncommon but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to its various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. Underlining the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols led to improved sensitivity for protein detection on HPTLC plates in comparison with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides proving its applicability for the very first time, the study shows that (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can furthermore be used for a semi-quantitative estimation of protein concentrations.

  19. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    PubMed Central

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-01-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is uncommon but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to its various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. Underlining the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols led to improved sensitivity for protein detection on HPTLC plates in comparison with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides proving its applicability for the very first time, the study shows that (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can furthermore be used for a semi-quantitative estimation of protein concentrations. PMID:27220270

  20. Coal-fired high performance power generating system. Quarterly progress report, July 1, 1993--September 30, 1993

    SciTech Connect

    Not Available

    1993-12-31

    This report covers work carried out under Task 3, Preliminary Research and Development, and Task 4, Commercial Generating Plant Design, under contract DE-AC22-92PC91155, {open_quotes}Engineering Development of a Coal Fired High Performance Power Generation System{close_quotes} between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of >47% thermal efficiency; NO{sub x}, SO{sub x}, and particulates {le} 25% NSPS; coal providing {ge}65% of heat input; and all solid wastes benign. The report discusses progress in cycle analysis, chemical reactor modeling, ash deposition rate calculations for the HITAF (high temperature advanced furnace) convective air heater, air heater materials, and deposit initiation and growth on ceramic substrates.

  1. Coal-fired high performance power generating system. Draft quarterly progress report, January 1--March 31, 1995

    SciTech Connect

    1995-10-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, ``Engineering Development of a Coal-Fired High Performance Power Generation System`` between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of >47% thermal efficiency; NO{sub x}, SO{sub x} and particulates {le} 25% NSPS; coal providing {ge}65% of heat input; and all solid wastes benign. A crucial aspect of the authors' design is the integration of the gas turbine requirements with the HITAF output and steam cycle requirements. In order to take full advantage of modern, highly efficient aeroderivative gas turbines, they have carried out a large number of cycle calculations to optimize their commercial plant designs for both greenfield and repowering applications.

  2. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
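The partitioning idea — each snoop unit forwards coherence traffic only to processors in its own group, so the groups become independent, memory-consistent domains — can be illustrated with a toy bitmask model. Class and method names here are invented for illustration; the patent does not specify this interface:

```python
class SnoopUnit:
    """Toy model of a snoop unit that accepts cache-coherence requests
    only from processors inside its configured partition."""
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.partition_mask = ~0  # default: one global coherence domain

    def configure(self, partition_mask):
        # bit i set => CPU i belongs to this unit's coherence domain
        self.partition_mask = partition_mask

    def should_snoop(self, requester_id):
        # Act on the snoop only if the requester is in our partition.
        return bool(self.partition_mask & (1 << requester_id))

# Partition an 8-CPU system into two independent 4-CPU coherence domains.
snoops = [SnoopUnit(i) for i in range(8)]
for s in snoops[:4]:
    s.configure(0b00001111)
for s in snoops[4:]:
    s.configure(0b11110000)
```

Reconfiguring the masks resizes the groups, mirroring the patent's "adjustable-size processing groups".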

  3. Development of Nano-structured Electrode Materials for High Performance Energy Storage System

    NASA Astrophysics Data System (ADS)

    Huang, Zhendong

    Systematic studies have been done to develop a low cost, environmental-friendly facile fabrication process for the preparation of high performance nanostructured electrode materials and to fully understand the influence factors on the electrochemical performance in the application of lithium ion batteries (LIBs) or supercapacitors. For LIBs, LiNi1/3Co1/3Mn1/3O2 (NCM) with a 1D porous structure has been developed as cathode material. The tube-like 1D structure consists of inter-linked, multi-facet nanoparticles of approximately 100-500nm in diameter. The microscopically porous structure originates from the honeycomb-shaped precursor foaming gel, which serves as self-template during the stepwise calcination process. The 1D NCM presents specific capacities of 153, 140, 130 and 118mAh·g-1 at current densities of 0.1C, 0.5C, 1C and 2C, respectively. Subsequently, a novel stepwise crystallization process consisting of a higher crystallization temperature and longer period for grain growth is employed to prepare single crystal NCM nanoparticles. The modified sol-gel process followed by optimized crystallization process results in significant improvements in chemical and physical characteristics of the NCM particles. They include a fully-developed single crystal NCM with uniform composition and a porous NCM architecture with a reduced degree of fusion and a large specific surface area. The NCM cathode material with these structural modifications in turn presents significantly enhanced specific capacities of 173.9, 166.9, 158.3 and 142.3mAh·g -1 at 0.1C, 0.5C, 1C and 2C, respectively. Carbon nanotube (CNT) is used to improve the relative low power capability and poor cyclic stability of NCM caused by its poor electrical conductivity. The NCM/CNT nanocomposites cathodes are prepared through simply mixing of the two component materials followed by a thermal treatment. The CNTs were functionalized to obtain uniformly-dispersed MWCNTs in the NCM matrix. The electrochemical

  4. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    SciTech Connect

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE`s own production and disassembly activities.

  5. Hierarchical rapid modeling and simulation of high-performance picture archive and communications systems

    NASA Astrophysics Data System (ADS)

    Anderson, Kenneth R.; Meredith, Glenn; Prior, Fred W.; Wirsz, Emil; Wilson, Dennis L.

    1992-07-01

    Due to the expense and time required to configure and evaluate large-scale PACS, rapid modeling and simulation of system configurations is critical. The results of the analysis can be used to drive the design of both hardware and software, and system designers can use the models during the actual system integration. This paper shows how the LANNET II.5 and NETWORK II.5 modeling tools can be used hierarchically to model and simulate large PACS. A detailed description of the Communication Network model, one of three models used for the Medical Diagnostic Imaging Support System (MDIS) design analysis, is presented. The paper concludes with future issues in the modeling of MDIS and other large heterogeneous networks of computers and workstations, and explains how the models might be used throughout the system life cycle to reduce the operation and maintenance costs of the system.

  6. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  7. Design and test of high performance composite tubes for use in deep water drilling and production systems

    NASA Astrophysics Data System (ADS)

    Odru, Pierre; Massonpierre, Yves

    1987-10-01

    High performance composite tubes to be used as marine risers, in deepwater drilling or in production systems were developed. They are composed of several layers with independant functions. Structural layers made of high resistance fibers set in a resin matrix, are filament wound and consist of circumferential layers, perpendicular to the tube axis, to resist bursting stresses, and longitudinal layers, helically wound, to resist axial forces. The tubes are completed with internal and external liners and are terminated at extremities by steel end pieces to which the composite layers are carefully bonded. The concept of high performance composite tubes is described, including their end fittings. Tests were carried out to verify and improve the properties of the pipes, in ultimate conditions (burst pressure up to 170 MPa, ultimate tensile, collapse), as well as fatigue and aging. Results are satisfactory and real applications are envisaged.

  8. Multi-Core Technology for Fault-Tolerant High-Performance Spacecraft Computer Systems

    NASA Astrophysics Data System (ADS)

    Behr, Peter M.; Haulsen, Ivo; Van Kampenhout, J. Reinier; Pletner, Samuel

    2012-08-01

    The current architectural trends in the field of multi-core processors can provide an enormous increase in processing power by exploiting the parallelism available in many applications. In particular because of their high energy efficiency, it is obvious that multi-core processor-based systems will also be used in future space missions. In this paper we present the system architecture of a powerful optical sensor system based on the eight core multi-core processor P4080 from Freescale. The fault tolerant structure and the highly effective FDIR concepts implemented on different hardware and software levels of the system are described in detail. The space application scenario and thus the main requirements for the sensor system have been defined by a complex tracking sensor application for autonomous landing or docking manoeuvres.

  9. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  10. Modeling and simulation of a high-performance PACS based on a shared file system architecture

    NASA Astrophysics Data System (ADS)

    Meredith, Glenn; Anderson, Kenneth R.; Wirsz, Emil; Prior, Fred W.; Wilson, Dennis L.

    1992-07-01

    Siemens and Loral Western Development Labs have designed a Picture Archiving and Communication System capable of supporting a large, fully digital hospital. Its functions include the management, storage and retrieval of medical images. The system may be modeled as a heterogeneous network of processing elements, transfer devices and storage units. Several discrete event simulation models have been designed to investigate different levels of the design. These models include the System Model, focusing on the flow of image traffic throughout the system, the Workstation Models, focusing on the internal processing in the different types of workstations, and the Communication Network Model, focusing on the control communication and host computer processing. The first two of these models are addressed here, with reference being made to a separate paper regarding the Communication Network Model. This paper describes some of the issues addressed with the models, the modeling techniques used and the performance results from the simulations. Important parameters of interest include: time to retrieve images from different possible storage locations and the utilization levels of the transfer devices and other key hardware components. To understand system performance under fully loaded conditions, the proposed system for the Madigan Army Medical Center was modeled in detail, as part of the Medical Diagnostic Imaging Support System (MDIS) proposal.
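The kind of question these models answer — image retrieval time and utilization of a transfer or storage device under load — can be illustrated with a minimal discrete-event sketch of a single archive device serving queued requests. The arrival pattern and service time below are made-up numbers for illustration, not MDIS parameters:

```python
def simulate_retrievals(arrivals, service_time):
    """Minimal discrete-event model of one image-archive device:
    requests are served FIFO; returns per-request retrieval times
    (wait + service) and device utilization."""
    device_free_at = 0.0
    busy = 0.0
    retrieval_times = []
    for t in sorted(arrivals):
        start = max(t, device_free_at)   # wait if the device is busy
        device_free_at = start + service_time
        busy += service_time
        retrieval_times.append(device_free_at - t)
    utilization = busy / device_free_at
    return retrieval_times, utilization

# Four image requests (seconds) against a device with a 2 s service time.
times, util = simulate_retrievals([0.0, 1.0, 1.5, 2.0], service_time=2.0)
```

Sweeping the arrival rate in such a model exposes the saturation point of each transfer device, which is what the full System Model does at scale.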

  11. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge-based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High-speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  12. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    SciTech Connect

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
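The trade-off studied here can be caricatured with a back-of-the-envelope model: dynamic power falls roughly with the square of supply voltage at fixed frequency, while the higher fault rate stretches execution time through recovery, so net energy can still drop. The functional form and all numbers below are illustrative assumptions, not the paper's model:

```python
def energy_with_undervolting(base_power, base_time, v_scale,
                             fault_rate, recovery_overhead):
    """Toy energy model for undervolting at fixed frequency.

    v_scale           -- supply voltage relative to nominal (e.g. 0.9)
    fault_rate        -- faults per second at the reduced voltage
    recovery_overhead -- fractional runtime added per expected fault
    """
    power = base_power * v_scale ** 2            # P ~ C * V^2 * f
    expected_faults = fault_rate * base_time
    time = base_time * (1 + recovery_overhead * expected_faults)
    return power * time                          # energy in joules

nominal = energy_with_undervolting(100.0, 1000.0, 1.0, 0.0, 0.05)
undervolted = energy_with_undervolting(100.0, 1000.0, 0.9, 0.002, 0.05)
```

With these assumed numbers the quadratic power saving outweighs the 10% runtime penalty from recoveries; at a high enough fault rate the inequality reverses, which is the balance the paper's methodology quantifies.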

  13. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    SciTech Connect

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  14. HIPERCIR: a low-cost high-performance 3D radiology image analysis system

    NASA Astrophysics Data System (ADS)

    Blanquer, Ignacio; Hernandez, Vincente; Ramirez, Javier; Vidal, Antonio M.; Alcaniz-Raya, Mariano L.; Grau Colomer, Vincente; Monserrat, Carlos A.; Concepcion, Luis; Marti-Bonmati, Luis

    1999-07-01

    Clinics currently have to deal with hundreds of 3D images a day, and their processing and visualization using currently affordable systems is very costly and slow. The present work shows the features of an integrated parallel-computing software package developed at the Universidad Politecnica de Valencia (UPV) under the European project HIPERCIR, which is aimed at reducing the time and requirements for processing and visualizing 3D images with low-cost solutions, such as networks of PCs running standard operating systems. HIPERCIR is targeted at radiology departments of hospitals and radiology system providers, to provide them with a tool for easing day-to-day diagnosis. The project is being developed by a consortium of medical image processing and parallel computing experts from the Computing Systems Department of the UPV, experts on biomedical software, and radiology and tomography clinical experts.

  15. Analytical design of a high performance stability and control augmentation system for a hingeless rotor helicopter

    NASA Technical Reports Server (NTRS)

    Miyajima, K.

    1978-01-01

    A stability and control augmentation system (SCAS) was designed based on a set of comprehensive performance criteria. Linear optimal control theory was applied to determine appropriate feedback gains for the stability augmentation system (SAS). The helicopter was represented by six-degree-of-freedom rigid-body equations of motion, and constant factors were used as weightings for state and control variables. The ratio of these factors was employed as a parameter for SAS analysis, and values of the feedback gains were selected on this basis to satisfy three of the performance criteria for full and partial state feedback systems. A least-squares design method was then applied to determine control augmentation system (CAS) cross-feed gains to satisfy the remaining seven performance criteria. The SCAS gains were then evaluated by nine-degree-of-freedom equations, which include flapping motion, and conclusions were drawn concerning the necessity of including the pitch/regressing and roll/regressing modes in SCAS analyses.
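The SAS step — choosing feedback gains by linear optimal control, with the state/control weighting ratio as the design parameter — is a standard LQR computation. A NumPy-only sketch follows, solving the Riccati equation via the stable eigenspace of the Hamiltonian; the double-integrator plant is a toy stand-in, not the six-degree-of-freedom helicopter model:

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR gain K (u = -K x), via the stable
    eigenspace of the Hamiltonian matrix of the Riccati equation."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # n eigenvectors of stable eigenvalues
    P = np.real(stable[n:, :] @ np.linalg.inv(stable[:n, :]))
    return Rinv @ B.T @ P

# Toy double-integrator plant; rho plays the role of the state/control
# weighting ratio used as the design parameter in the paper.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
rho = 10.0
K = lqr_gain(A, B, rho * np.eye(2), np.eye(1))
closed_loop_eigs = np.linalg.eigvals(A - B @ K)  # should all be stable
```

Sweeping rho and checking the resulting closed-loop modes against handling-quality criteria is the essence of the SAS gain-selection procedure described above.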

  16. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene: recognizing the lecturer and a lecture stick by pattern-recognition techniques, the system deletes the low-importance figure of the lecturer and displays only the end point of the lecture stick. This enables the creation of highly compressed lecture video files that are suitable for Internet distribution. We compare this technique with other, simpler methods, such as lower frame-rate video files and ordinary MPEG files. The experimental results show that the proposed compression processing system is much more effective than the others.

  17. High-performance sub-terahertz transmission imaging system for food inspection

    PubMed Central

    Ok, Gyeongsik; Park, Kisang; Chun, Hyang Sook; Chang, Hyun-Joo; Lee, Nari; Choi, Sung-Wook

    2015-01-01

    Unlike X-ray systems, a terahertz imaging system can distinguish low-density materials in a food matrix. For applying this technique to food inspection, imaging resolution and acquisition speed ought to be simultaneously enhanced. Therefore, we have developed the first continuous-wave sub-terahertz transmission imaging system with a polygonal mirror. Using an f-theta lens and a polygonal mirror, beam scanning is performed over a range of 150 mm. For obtaining transmission images, the line-beam is incorporated with sample translation. The imaging system demonstrates that a pattern with 2.83 mm line-width at 210 GHz can be identified with a scanning speed of 80 mm/s. PMID:26137392
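As a sanity check on the reported numbers, the free-space wavelength at 210 GHz can be computed directly; the 2.83 mm resolved line-width then corresponds to roughly two wavelengths, consistent with a diffraction-limited focused beam:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz):
    """Free-space wavelength in millimetres for a given frequency."""
    return C / freq_hz * 1e3

wl = wavelength_mm(210e9)  # ~1.43 mm at the system's 210 GHz
```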

  18. An ultralightweight, evacuated, load-bearing, high-performance insulation system. [for cryogenic propellant tanks

    NASA Technical Reports Server (NTRS)

    Parmley, R. T.; Cunnington, G. R., Jr.

    1978-01-01

    A new hollow-glass microsphere insulation and a flexible stainless-steel vacuum jacket were demonstrated on a flight-weight cryogenic test tank, 1.17 m in diameter. The system weighs one-third as much as the most advanced vacuum-jacketed design demonstrated to date, a free-standing honeycomb hard shell with a multilayer insulation system (for a Space Tug application). Design characteristics of the flexible vacuum jacket are presented, along with a model describing the insulation's thermal performance as a function of boundary temperatures and emittance, compressive load on the insulation, and insulation gas pressure. Test data are compared with model predictions and with prior flat-plate calorimeter test results. Potential applications for this insulation system, or a derivative of it, include the cryogenic Space Tug, the Single-Stage-to-Orbit Space Shuttle, LH2-fueled subsonic and hypersonic aircraft, and LNG applications.

  19. A high-performance miniaturized time division multiplexed sensor system for remote structural health monitoring

    NASA Astrophysics Data System (ADS)

    Lloyd, Glynn D.; Everall, Lorna A.; Sugden, Kate; Bennion, Ian

    2004-09-01

    We report for the first time the design, implementation and commercial application of a hand-held, optical time-division-multiplexed, distributed fibre Bragg grating sensor system. A unique combination of state-of-the-art electronic and optical components enables system miniaturization whilst maintaining exceptional performance. Supporting more than 100 low-cost sensors per channel, the battery-powered system operates remotely via a wireless GSM link, making it ideal for real-time structural health monitoring in harsh environments. Driven by highly configurable timing electronics, an off-the-shelf telecommunications semiconductor optical amplifier performs combined amplification and gating. This novel optical configuration boasts a spatial resolution of less than 20 cm and an optical signal-to-noise ratio of better than 30 dB, yet utilizes sensors with reflectivity of only a few percent and does not require RF-speed signal-processing devices. This paper highlights the performance and cost advantages of a system that utilizes TDM-style, mass-manufactured commodity FBGs. Created in continual lengths, these sensors reduce stock inventory, eliminate application-specific array design and simplify system installation and expansion. System analysis from commercial installations in oil exploration, wind energy and vibration measurement is presented, with results showing kilohertz interrogation speed and microstrain resolution.
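The sub-20 cm spatial resolution follows from round-trip time gating: light travels out to a grating and back, so two gratings are resolvable when their separation exceeds c·τ/(2n) for gate time τ. A short sketch; the 1.468 group index is an assumed typical value for silica fibre, not a figure from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tdm_spatial_resolution(gate_time_s, n_group=1.468):
    """Two-point spatial resolution of a TDM fibre-grating interrogator:
    round-trip propagation gives resolution = c * tau / (2 * n)."""
    return C * gate_time_s / (2.0 * n_group)

# A ~2 ns optical gate corresponds to roughly the 20 cm figure reported.
res = tdm_spatial_resolution(2e-9)
```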

  20. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

    SciTech Connect

    Sickinger, D.; Van Geet, O.; Ravenscroft, C.

    2014-11-01

In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and the potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. The system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling were included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control were included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no maintenance required and no leaks.

  1. Design and Integration for High Performance Robotic Systems Based on Decomposition and Hybridization Approaches

    PubMed Central

    Zhang, Dan; Wei, Bin

    2017-01-01

Current uses of robotics remain limited with respect to performance capabilities. Improving the performance of robotic mechanisms is, and will remain, a main research topic over the next decade. In this paper, design and integration for improving the performance of robotic systems are achieved through three different approaches: a structure synthesis design approach, a dynamic balancing approach, and an adaptive control approach. The purpose of robotic mechanism structure synthesis is to propose mechanisms that have better kinematic and dynamic performance than existing ones. The dynamic balancing design approach is normally accomplished by employing counterweights or counter-rotations; the potential issue is that additional weight and inertia are introduced into the system. Here, a reactionless design based on the reconfiguration concept is put forward, which addresses this problem. With the mechanism reconfiguration, the control system needs to be adapted accordingly. One way to address control system adaptation is by applying the “divide and conquer” methodology: modularizing the functionalities by breaking the control functions into small functional modules, and from those modules assembling the control system according to the changing needs of the mechanism. PMID:28075360

  2. Building America Best Practices Series, Volume 6: High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Ruiz, Kathleen A.; Steward, Heidi E.; Love, Pat M.

    2007-06-04

This guide was written by PNNL for the US Department of Energy's Building America program to provide information for residential production builders interested in building near zero energy homes. It provides in-depth descriptions of various roof-top photovoltaic power generating systems for homes, along with extensive information on various designs of solar thermal water heating systems. The guide also gives construction company owners and managers an understanding of how solar technologies can be added to their homes in a way that is cost effective, practical, and marketable. Twelve case studies provide examples of production builders across the United States who are building energy-efficient homes with photovoltaic or solar water heating systems.

  3. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV. But its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied in the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality, while reducing calculation complexity.

  4. A High Performance Sample Delivery System for Closed-Path Eddy Covariance Measurements

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Leggett, Graham; Alstad, Karrin; Wahl, Edward

    2016-04-01

The Picarro G2311-f Cavity Ring-Down Spectrometer (CRDS) measures CO2, CH4 and water vapor at high frequency with parts-per-billion (ppb) sensitivity for eddy covariance, gradient, and eddy accumulation measurements. In flux mode, the analyzer measures the concentration of all three species at 10 Hz with a cavity gas-exchange rate of 5 Hz. We developed an enhanced pneumatic sample delivery system for drawing air from the atmosphere into the cavity. The new sample delivery system maintains a 5 Hz gas-exchange rate and allows longer sample intake lines to be configured in tall tower applications (> 250 ft line at sea level). We quantified the system performance in terms of vacuum pump head room and 10-90% concentration step response for several intake line lengths at various elevations. Sample eddy covariance data are shown from an alfalfa field in Northern California, USA.
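
For context, the eddy-covariance flux such an analyzer supports is just the time-averaged covariance of vertical-wind and concentration fluctuations. A minimal sketch with synthetic 10 Hz data (the correlation built into the fake record is an assumption for illustration, not the field data mentioned above):

```python
# Minimal eddy-covariance flux estimate: the flux of a scalar c is the
# time-averaged covariance of vertical-wind and concentration
# fluctuations, F = mean(w' * c'), computed here from 10 Hz samples.
import random

def eddy_flux(w: list[float], c: list[float]) -> float:
    """Covariance of w and c about their block means (Reynolds averaging)."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n

if __name__ == "__main__":
    random.seed(0)
    # Synthetic 10 Hz record: upward-moving air (w' > 0) carries higher CO2.
    w = [random.gauss(0.0, 0.3) for _ in range(6000)]            # m/s
    c = [400.0 + 5.0 * wi + random.gauss(0.0, 0.5) for wi in w]  # ppm
    print(f"flux ≈ {eddy_flux(w, c):.3f} ppm·m/s")  # ≈ 5 * var(w) ≈ 0.45
```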

  5. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and task deadlines. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and reduce the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
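
The abstract does not reproduce the PDAMS algorithm itself; as a hypothetical illustration of the general idea of combining deadline-aware load balancing with power saving, the sketch below places each task on the earliest-available core and then scales that core's frequency down just enough to still meet the deadline (all names and parameters are invented for the example, and this is not the paper's method):

```python
# Illustrative (NOT the paper's PDAMS algorithm): a deadline-aware
# load balancer that places each task on the core whose finish time is
# earliest, then scales that core's frequency down just enough to still
# meet the task's deadline -- the classic "race less, finish on time"
# power heuristic, since dynamic power grows roughly with f^3.

def place_task(core_finish, cycles, deadline, f_max=2.0e9):
    """Pick the core that frees up first; return (core, frequency) or None."""
    core = min(range(len(core_finish)), key=lambda i: core_finish[i])
    start = core_finish[core]
    slack = deadline - start
    if slack <= 0:
        return None  # infeasible: deadline already passed on every core
    # Lowest frequency that still completes `cycles` before the deadline.
    f = cycles / slack
    if f > f_max:
        return None  # infeasible even at maximum frequency
    core_finish[core] = start + cycles / f
    return core, f

if __name__ == "__main__":
    finish = [0.0, 0.0]                                  # two idle cores
    print(place_task(finish, cycles=1e9, deadline=1.0))  # runs at 1 GHz
```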

  6. Building High-Performing and Improving Education Systems: Quality Assurance and Accountability. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    Monitoring, evaluation, and quality assurance in their various forms are seen as being one of the foundation stones of high-quality education systems. De Grauwe, writing about "school supervision" in four African countries in 2001, linked the decline in the quality of basic education to the cut in resources for supervision and support.…

  7. A bioinspired, reusable, paper-based system for high-performance large-scale evaporation.

    PubMed

    Liu, Yanming; Yu, Shengtao; Feng, Rui; Bernard, Antoine; Liu, Yang; Zhang, Yao; Duan, Haoze; Shang, Wen; Tao, Peng; Song, Chengyi; Deng, Tao

    2015-05-06

    A bioinspired, reusable, paper-based gold-nanoparticle film is fabricated by depositing an as-prepared gold-nanoparticle thin film on airlaid paper. This paper-based system with enhanced surface roughness and low thermal conductivity exhibits increased efficiency of evaporation, scale-up potential, and proven reusability. It is also demonstrated to be potentially useful in seawater desalination.

  8. Aim Higher: Lofty Goals and an Aligned System Keep a High Performer on Top

    ERIC Educational Resources Information Center

    McCommons, David P.

    2014-01-01

    Every school district is feeling the pressure to ensure higher academic achievement for all students. A focus on professional learning for an administrative team not only improves student learning and achievement, but also assists in developing a systemic approach for continued success. This is how the Fox Chapel Area School District in…

  9. Knowledge Work Supervision: Transforming School Systems into High Performing Learning Organizations.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    1997-01-01

    This article describes a new supervision model conceived to help a school system redesign its anatomy (structures), physiology (flow of information and webs of relationships), and psychology (beliefs and values). The new paradigm (Knowledge Work Supervision) was constructed by reviewing the practices of several interrelated areas: sociotechnical…

  10. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  11. Analysis of a magnetically suspended, high-performance instrument pointing system

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1978-01-01

    This paper describes a highly accurate auxiliary instrument pointing system which can provide fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during extended space shuttle orbital missions. This system, called the Annular Suspension and Pointing System (ASPS), consists of pointing assemblies for coarse and vernier pointing. The 'coarse' assembly is attached to the spacecraft (e.g., the space shuttle) and consists of an elevation gimbal and a lateral gimbal to provide coarse pointing. The vernier pointing assembly consists of the payload instrument mounted on a plate around which is attached a continuous annular rim. The vernier assembly is suspended in the lateral gimbal using magnetic actuators which provide rim suspension forces and fine pointing torques. A detailed linearized mathematical model is developed for the ASPS/space shuttle system, and control laws and payload attitude state estimators are designed. Statistical pointing performance is predicted in the presence of stochastic disturbances such as crew motion, sensor noise, and actuator noise.
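
As a hypothetical illustration of the "payload attitude state estimators" mentioned above (not the actual ASPS design), the sketch below is a discrete Kalman filter for a single pointing axis modeled as a double integrator driven by random disturbance torques, with assumed noise variances:

```python
# Illustrative sketch (not the actual ASPS estimator): a discrete Kalman
# filter estimating one pointing axis modeled as a double integrator
# (attitude, rate) driven by random disturbance accelerations, with a
# noisy attitude sensor.
DT = 0.01       # sample time, s (assumed)
Q_ACC = 1e-4    # assumed disturbance-acceleration variance, (rad/s^2)^2
R_MEAS = 1e-6   # assumed attitude-sensor noise variance, rad^2

def kf_step(x, P, z):
    """One predict/update cycle; x = [theta, omega], P = 2x2 covariance."""
    # Predict: theta += omega*dt (F = [[1, dt], [0, 1]]).
    x = [x[0] + DT * x[1], x[1]]
    p00 = P[0][0] + DT * (P[1][0] + P[0][1]) + DT * DT * P[1][1]
    p01 = P[0][1] + DT * P[1][1]
    p10 = P[1][0] + DT * P[1][1]
    p11 = P[1][1] + Q_ACC * DT
    # Update with attitude measurement z (H = [1, 0]).
    s = p00 + R_MEAS
    k0, k1 = p00 / s, p10 / s
    y = z - x[0]
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * p00, (1 - k0) * p01],
         [p10 - k1 * p00, p11 - k1 * p01]]
    return x, P

if __name__ == "__main__":
    x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
    theta = 0.0
    for _ in range(2000):            # 20 s of a slow 0.01 rad/s drift
        theta += 0.01 * DT
        x, P = kf_step(x, P, theta)  # noiseless measurement for brevity
    print(x)  # rate estimate converges to ~0.01 rad/s
```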

  12. Rewarding high performers--the pay-for-performance system at Heritage Dental Center.

    PubMed

    Kohen, J

    2001-01-01

    As it has in many businesses in the U.S., a system rewarding exceptional performance has proven to be successful in the dental industry. The effective staff incentive program in place at Heritage Dental Center provides benefits for all members of the dental team.

  13. Isolation, pointing, and suppression (IPS) system for high-performance spacecraft

    NASA Astrophysics Data System (ADS)

    Hindle, Tim; Davis, Torey; Fischer, Jim

    2007-04-01

Passive mechanical isolation is often the first step taken to remedy vibration issues on board a spacecraft. In many cases, this is done with a hexapod of axial members, or struts, to obtain the desired passive isolation in all six degrees of freedom (DOF). In some instances, where the disturbance sources are excessive or the payload is particularly sensitive to vibration, additional steps are taken to improve performance beyond that of passive isolation. Additional performance or functionality can be obtained with the addition of active control, using a hexapod of hybrid (passive/active) elements at the interface between the payload and the bus. This paper describes Honeywell's Isolation, Pointing, and Suppression (IPS) system. It is a hybrid isolation system designed to isolate a sensitive spacecraft payload with very low passive resonant break frequencies while affording agile independent payload pointing, on-board payload disturbance rejection, and active isolation augmentation. This system is an extension of the work done on Honeywell's previous Vibration Isolation, Steering, and Suppression (VISS) flight experiment. Besides being designed for a different payload size than VISS, the IPS strut includes a dual-stage voice-coil design for improved dynamic range as well as improved low-noise drive electronics. In addition, the IPS struts include integral load cells, gap sensors, and payload-side accelerometers for control and telemetry purposes. The associated system-level control architecture to accomplish these tasks is also new for this program as compared to VISS. A summary of the IPS system, including analysis and hardware design, build, and single-axis bipod testing, is presented.

  14. Building-Wide, Adaptive Energy Management Systems for High-Performance Buildings: Final CRADA Report

    SciTech Connect

    Zavala, Victor M.

    2016-10-27

Development and field demonstration of the minimum-ratio policy for occupancy-driven, predictive control of outdoor air ventilation. Technology transfer of Argonne’s methods for occupancy estimation and forecasting and for M&V to BuildingIQ for their deployment. Selection of CO2 sensing as the currently best-available technology for occupancy-driven controls. Accelerated restart capability for the commercial BuildingIQ system using horizon-shifting strategies applied to receding-horizon optimal control problems. Empirically based evidence of 30% chilled-water energy savings and 22% total HVAC energy savings achievable with the BuildingIQ system operating in the APS Office Building on-site at Argonne.

  15. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.

  16. Advanced Concurrent Interfaces for High-Performance Multi-Media Distributed C3 Systems

    DTIC Science & Technology

    1993-03-01

[Only fragments of this record survive extraction: personnel mentioned include Adelson, Associate Professor of Visual Sciences and co-Director (with Dr. Alex Pentland) of the Lab's Vision Science Group, and Walter Bender of the Graphics and Animation Group, who conducted the work on print-quality maps, providing real-time input to the system as well as more rapid response when reconstituting the video.]

  17. Chaining for Flexible and High-Performance Key-Value Systems

    DTIC Science & Technology

    2012-09-01

  18. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  19. Novel digital logic gate for high-performance CMOS imaging system

    NASA Astrophysics Data System (ADS)

    Chung, Hoon H.; Joo, Youngjoong

    2004-06-01

CMOS image sensors are now commonly used in many low-resolution applications because the CMOS imaging system has several advantages over the conventional CCD imaging system. However, several problems remain for the realization of a single-chip CMOS imaging system. One main problem is substrate coupling noise, which is caused by digital switching noise. Because the CMOS image sensor shares the same substrate with the surrounding digital circuits, it is difficult for the sensor to achieve good performance. In order to investigate the substrate coupling noise effect on the CMOS image sensor, conventional CMOS logic, C-CBL (Complementary Current-Balanced Logic), and the proposed low-switching-noise logic are simulated and compared. The proposed logic compensates not only for the large digital switching noise of conventional CMOS logic, but also for the large power consumption of C-CBL. Both the total instantaneous current behavior on the power supply and the peak-to-peak substrate voltage variation (di/dt noise) are investigated. The simulation is performed in AMI 0.5 μm CMOS technology.

  20. A biolized, compact, low noise, high performance implantable electromechanical ventricular assist system.

    PubMed

    Sasaki, T; Takatani, S; Shiono, M; Sakuma, I; Noon, G P; Nosé, Y; DeBakey, M E

    1991-01-01

An implantable electromechanical ventricular assist system (VAS) intended for permanent human use was developed. It consisted of a conically shaped pumping chamber, a polyolefin (Hexsyn) rubber diaphragm attached to a pusher-plate, and a compact actuator with a direct-current brushless motor and a planetary rollerscrew. The outer diameter was 97 mm, and the total thickness was 70 mm. This design was chosen to give a stroke volume of 63 ml. The device weighs 620 g, with a total volume of 360 ml. The pump can provide 8 L/min flow against a 120 mmHg afterload with a preload of 10 mmHg. The inner surface of the device, including the pumping chamber and diaphragm, was made biocompatible with a dry gelatin coating. To date, two subacute (2- and 6-day) calf studies have been conducted. The pump showed reasonable anatomic fit inside the left thorax, and the entire system functioned satisfactorily both in the fill-empty mode using the Hall effect sensor signals and in the conventional fixed-rate mode. There were no thromboembolic complications despite the absence of anticoagulation therapy. The system has now been under endurance testing for more than 10 weeks (9 million cycles). This VAS is compact, low noise, easy to control, and has excellent biocompatibility.

  1. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    PubMed Central

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of the National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center. PMID:7703940

  2. High performance 3-coil wireless power transfer system for the 512-electrode epiretinal prosthesis.

    PubMed

    Zhao, Yu; Nandra, Mandheerej; Yu, Chia-Chen; Tai, Yu-chong

    2012-01-01

The next-generation retinal prostheses feature high image resolution and chronic implantation. These features demand that the delivery of power as high as 100 mW be wireless and efficient. A common solution is the 2-coil inductive power link used by current retinal prostheses. This power link tends to include a larger-size extraocular receiver coil coupled to the external transmitter coil, with the receiver coil connected to the intraocular electrodes through a trans-sclera trans-choroid cable. In long-term implantation of the device, the cable may cause hypotony (low intraocular pressure) and infection. However, when a 2-coil system is constructed from a small-size intraocular receiver coil, the efficiency drops drastically, which may induce excessive heat dissipation and electromagnetic field exposure. Our previous 2-coil system achieved only 7% power transfer. This paper presents a fully intraocular and highly efficient wireless power transfer system, introducing another inductive coupling link to bypass the trans-sclera trans-choroid cable. With the specific equivalent load of our customized 512-electrode stimulator, the current 3-coil inductive link was measured to have an overall power transfer efficiency of around 36% with 1-inch separation in saline. The high efficiency favorably reduces the heat dissipation and electromagnetic field exposure to surrounding human tissues. The effect of eyeball rotation on the power transfer efficiency was investigated as well: the efficiency is still maintained at 14.7% with left and right deflections of 30 degrees during normal use. The surgical procedure for implanting the coils into the porcine eye was also demonstrated.
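
For orientation, the textbook upper bound on resonant-inductive-link efficiency shows why inserting a relay coil helps: efficiency is a function of the figure of merit U = k√(Q1·Q2), and a 3-coil chain replaces one weakly coupled stage (external coil to a tiny intraocular coil) with two better-coupled ones. The coupling factors and quality factors below are illustrative assumptions, not the paper's measurements:

```python
# Textbook maximum power-transfer efficiency of a resonant inductive
# link: with coupling k and coil quality factors Q1, Q2, the figure of
# merit is U = k*sqrt(Q1*Q2) and
#     eta_max = U^2 / (1 + sqrt(1 + U^2))^2.
import math

def eta_max(k: float, q1: float, q2: float) -> float:
    """Best-case efficiency of one resonant inductive stage."""
    u2 = (k * math.sqrt(q1 * q2)) ** 2
    return u2 / (1.0 + math.sqrt(1.0 + u2)) ** 2

if __name__ == "__main__":
    # Chained estimate for a relay design: stage efficiencies multiply.
    # All values below are assumed for illustration.
    print(eta_max(0.05, 100, 100) * eta_max(0.3, 100, 50))
```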

  3. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    PubMed

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

Microbial fuel cells (MFCs), which can directly generate electricity from organic waste or biomass, are a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow most electrical applications to be operated directly, whether supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the power extracted from MFCs, regardless of power and voltage fluctuations from the MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by drawing power directly from the MFC itself, without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored in the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits the sensor data, successfully transmitting a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.
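
Dynamic MPPT of the kind described is commonly realized with a perturb-and-observe loop. The sketch below applies that general technique to a simple Thevenin model of an MFC; the open-circuit voltage and internal resistance are assumed values, and the PMS IC's actual algorithm is not disclosed in this abstract:

```python
# Perturb-and-observe MPPT sketch (the general technique, not the
# paper's IC implementation). The MFC is modeled as a Thevenin source,
# V = Voc - I*R, so maximum power is drawn at the matched point V = Voc/2.

def mfc_power(i, voc=0.8, r=400.0):
    """Power delivered by a Thevenin-model MFC at drawn current i (A)."""
    return max(voc - i * r, 0.0) * i

def perturb_and_observe(i0=1e-4, step=2e-5, iters=200):
    """Nudge the operating current; reverse direction when power drops."""
    i, direction = i0, +1
    p_prev = mfc_power(i)
    for _ in range(iters):
        i += direction * step
        p = mfc_power(i)
        if p < p_prev:             # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return i

if __name__ == "__main__":
    # True MPP current is Voc / (2R) = 1 mA; P&O should settle near it.
    print(f"{perturb_and_observe() * 1e3:.2f} mA")
```

At the MPP of this assumed model the cell delivers 0.4 V and 400 μW, the same order as the 0.4 V, 512 μW operating point reported above.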

  4. High Performance Fuel Cell and Electrolyzer Membrane Electrode Assemblies (MEAs) for Space Energy Storage Systems

    NASA Technical Reports Server (NTRS)

    Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.

    2012-01-01

Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements of 0.92 V/1.44 V at 200 mA/cm2 can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks, and electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection, and membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst driven, and catalyst selection will be driven by durability requirements. Round-trip efficiency, based on cell performance, is approximately 65%.
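
The roughly 65% round-trip figure follows directly from the two cell voltages quoted above: at equal current, the electrical energy returned per coulomb is the fuel-cell voltage divided by the electrolyzer voltage.

```python
# Cell-level round-trip efficiency of a regenerative fuel cell: for the
# same charge passed, energy out / energy in = V_fuel_cell / V_electrolyzer.
V_FC, V_EL = 0.92, 1.44   # V at 200 mA/cm^2, from the abstract

round_trip = V_FC / V_EL
print(f"{round_trip:.1%}")   # 63.9%, i.e. "approximately 65%" at cell level
```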

  5. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    SciTech Connect

    Wu, Chase Qishi

    2016-12-01

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. 
The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to
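
The composition argument above can be made concrete with a back-of-envelope model (illustrative only, not the project's tooling): a single source-to-sink flow runs at the slowest component on its path, and striping several parallel flows is capped by the shared long-haul link.

```python
# Back-of-envelope composition of source-to-sink flows: end-to-end
# throughput of one flow is the minimum capacity along its path, and n
# striped flows are jointly bounded by the shared WAN link.

def path_throughput_gbps(*stages: float) -> float:
    """A single flow runs at the slowest stage on its path."""
    return min(stages)

def striped_throughput_gbps(per_flow: float, n_flows: int, wan: float) -> float:
    """n identical parallel flows sharing one WAN link."""
    return min(per_flow * n_flows, wan)

if __name__ == "__main__":
    # One flow: 32 Gbps storage HCA, 40 Gbps NIC, 100 Gbps WAN -> 32 Gbps.
    one = path_throughput_gbps(32, 40, 100)
    # Four such flows saturate the shared 100 Gbps WAN, not 4 x 32 = 128.
    print(one, striped_throughput_gbps(one, 4, 100))
```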

  6. High Performance CMOS Light Detector with Dark Current Suppression in Variable-Temperature Systems.

    PubMed

    Lin, Wen-Sheng; Sung, Guo-Ming; Lin, Jyun-Long

    2016-12-23

    This paper presents a dark current suppression technique for a light detector in a variable-temperature system. The light detector architecture comprises a photodiode for sensing the ambient light, a dark current diode for conducting dark current suppression, and a current subtractor that is embedded in the current amplifier with enhanced dark current cancellation. The measured dark current of the proposed light detector is lower than that of the epichlorohydrin photoresistor or cadmium sulphide photoresistor. This is advantageous in variable-temperature systems, especially for those with many infrared light-emitting diodes. Experimental results indicate that the maximum dark current of the proposed current amplifier is approximately 135 nA at 125 °C, a near zero dark current is achieved at temperatures lower than 50 °C, and dark current and temperature exhibit an exponential relation at temperatures higher than 50 °C. The dark current of the proposed light detector is lower than 9.23 nA and the linearity is approximately 1.15 μA/lux at an external resistance RSS = 10 kΩ and environmental temperatures from 25 °C to 85 °C.
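
The reported behavior, near-zero dark current below 50 °C and an exponential rise above it, is consistent with the usual rule of thumb that junction leakage roughly doubles every 8-10 °C. A hedged model anchored to the measured 135 nA at 125 °C (the doubling interval is an assumption, not a value fitted to the paper's device):

```python
# Exponential dark-current model: leakage anchored at 135 nA (125 C)
# and assumed to double every ~9 C, matching the paper's qualitative
# "exponential relation above 50 C, near zero below 50 C".

def dark_current_na(t_c, i_ref_na=135.0, t_ref_c=125.0, double_every_c=9.0):
    """Modeled leakage (nA) at temperature t_c (degrees C)."""
    return i_ref_na * 2.0 ** ((t_c - t_ref_c) / double_every_c)

if __name__ == "__main__":
    for t in (25, 50, 85, 125):
        print(t, round(dark_current_na(t), 3))
```

With these assumed parameters the model gives well under 1 nA at 50 °C and a few nA at 85 °C, consistent with the sub-9.23 nA figure reported over 25-85 °C.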

  7. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    SciTech Connect

Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge, Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  8. Metal-based anode for high performance bioelectrochemical systems through photo-electrochemical interaction

    NASA Astrophysics Data System (ADS)

    Liang, Yuxiang; Feng, Huajun; Shen, Dongsheng; Long, Yuyang; Li, Na; Zhou, Yuyang; Ying, Xianbin; Gu, Yuan; Wang, Yanfeng

    2016-08-01

    This paper introduces a novel composite anode that uses light to enhance current generation and accelerate biofilm formation in bioelectrochemical systems. The composite anode is composed of a 316L stainless steel substrate and a nanostructured α-Fe2O3 photocatalyst (PSS). The electrode properties, current generation, and biofilm properties of the anode are investigated. In terms of photocurrent, the optimal deposition and heat-treatment times are found to be 30 min and 2 min, respectively, which result in a maximum photocurrent of 0.6 A m-2. The start-up time of the PSS is 1.2 days and the maximum current density is 2.8 A m-2, twice and 25 times that of the unmodified anode, respectively. The current density of the PSS remains stable during 20 days of illumination. Confocal laser scanning microscope images show that the PSS could benefit biofilm formation, while electrochemical impedance spectroscopy indicates that the PSS reduces the charge-transfer resistance of the anode. Our findings show that photo-electrochemical interaction is a promising way to enhance the biocompatibility of metal anodes for bioelectrochemical systems.

  9. Design of high performance multivariable control systems for supermaneuverable aircraft at high angle of attack

    NASA Technical Reports Server (NTRS)

    Valavani, Lena

    1995-01-01

    The main motivation for the work under the present grant was to use nonlinear feedback linearization methods to further enhance performance capabilities of the aircraft, and robustify its response throughout its operating envelope. The idea was to use these methods in lieu of standard Taylor series linearization, in order to obtain a well behaved linearized plant, in its entire operational regime. Thus, feedback linearization was going to constitute an 'inner loop', which would then define a 'design plant model' to be compensated for robustness and guaranteed performance in an 'outer loop' application of modern linear control methods. The motivation for this was twofold; first, earlier work had shown that by appropriately conditioning the plant through conventional, simple feedback in an 'inner loop', the resulting overall compensated plant design enjoyed considerable enhancement of performance robustness in the presence of parametric uncertainty. Second, the nonlinear techniques did not have any proven robustness properties in the presence of unstructured uncertainty; a definition of robustness (and performance) is very difficult to achieve outside the frequency domain; to date, none is available for the purposes of control system design. Thus, by proper design of the outer loop, such properties could still be 'injected' in the overall system.

  10. High performance electrophoresis system for site-specific entrapment of nanoparticles in a nanoarray

    NASA Astrophysics Data System (ADS)

    Han, Jin-Hee; Lakshmana, Sudheendra; Kim, Hee-Joo; Hass, Elizabeth A.; Gee, Shirley; Hammock, Bruce D.; Kennedy, Ian

    2010-02-01

    A nanoarray, integrated with an electrophoretic system, was developed to trap nanoparticles into their corresponding nanowells. This nanoarray overcomes the complications of losing the function and activity of the protein binding to the surface in conventional microarrays, while using minimal amounts of sample. The nanoarray is also superior to other biosensors that use immunoassays in terms of lowering the limit of detection to the femto- or attomolar level. In addition, our electrophoretic particle entrapment system (EPES) is able to effectively trap the nanoparticles using a low trapping force for a short duration; therefore, good conditions for biological samples conjugated with particles can be maintained. The channels were patterned onto a bi-layer consisting of a PMMA and LOL coating on a conductive indium tin oxide (ITO)-coated glass slide by using e-beam lithography. Suspensions of 170 nm nanoparticles were then added to the chip, which was connected to a positive voltage. On top of the droplet, another ITO-coated glass slide was placed and connected to a ground terminal. Negatively charged fluorescent nanoparticles (blue emission) were selectively trapped onto the ITO surface at the bottom of the wells by following the electric field lines. Numerical modeling was performed using commercially available software, COMSOL Multiphysics, to provide a better understanding of the phenomenon of electrophoresis in a nanoarray. Simulation results are also useful for optimally designing a nanoarray for practical applications.

  11. A high-performance multilane microdevice system designed for the DNA forensics laboratory.

    PubMed

    Goedecke, Nils; McKenna, Brian; El-Difrawy, Sameh; Carey, Loucinda; Matsudaira, Paul; Ehrlich, Daniel

    2004-06-01

    We report preliminary testing of "GeneTrack", an instrument designed for the specific application of multiplexed short tandem repeat (STR) DNA analysis. The system supports a glass microdevice with 16 lanes of 20 cm effective length and double-T cross injectors. A high-speed galvanometer-scanned four-color detector was specially designed to accommodate the high elution rates on the microdevice. All aspects of the system were carefully matched to practical crime lab requirements for rapid reproducible analysis of crime-scene DNA evidence in conjunction with the United States DNA database (CODIS). Statistically significant studies demonstrate that an absolute, three-sigma, peak accuracy of 0.4-0.9 base pair (bp) can be achieved for the CODIS 13-locus multiplex, utilizing a single channel per sample. Only 0.5 microL of PCR product is needed per lane, a significant reduction in the consumption of costly chemicals in comparison to commercial capillary machines. The instrument is also designed to address problems in temperature-dependent decalibration and environmental sensitivity, which are weaknesses of the commercial capillary machines for the forensics application.

  12. High Performance CMOS Light Detector with Dark Current Suppression in Variable-Temperature Systems

    PubMed Central

    Lin, Wen-Sheng; Sung, Guo-Ming; Lin, Jyun-Long

    2016-01-01

    This paper presents a dark current suppression technique for a light detector in a variable-temperature system. The light detector architecture comprises a photodiode for sensing the ambient light, a dark current diode for conducting dark current suppression, and a current subtractor that is embedded in the current amplifier with enhanced dark current cancellation. The measured dark current of the proposed light detector is lower than that of the epichlorohydrin photoresistor or cadmium sulphide photoresistor. This is advantageous in variable-temperature systems, especially for those with many infrared light-emitting diodes. Experimental results indicate that the maximum dark current of the proposed current amplifier is approximately 135 nA at 125 °C, a near zero dark current is achieved at temperatures lower than 50 °C, and dark current and temperature exhibit an exponential relation at temperatures higher than 50 °C. The dark current of the proposed light detector is lower than 9.23 nA and the linearity is approximately 1.15 μA/lux at an external resistance RSS = 10 kΩ and environmental temperatures from 25 °C to 85 °C. PMID:28025530
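The abstract's figures admit a quick sanity check. Below is an illustrative model, not the authors' circuit equations: dark current assumed to be zero below the 50 °C knee and exponential above it, with the scale constant chosen so the model reaches the quoted 135 nA at 125 °C; the 1 nA prefactor is an assumption.

```python
import math

# Illustrative dark-current model (assumed form, not from the paper):
# ~0 below 50 C, exponential above, hitting ~135 nA at 125 C.
I_DARK_125C = 135e-9  # A, maximum dark current quoted in the abstract
T_KNEE = 50.0         # C, onset of the exponential region
I0 = 1e-9             # A, assumed prefactor at the knee

# Fit the exponent so the model reproduces the 125 C measurement.
k = math.log(I_DARK_125C / I0) / (125.0 - T_KNEE)

def dark_current(temp_c):
    """Assumed exponential dark-current model above the 50 C knee."""
    if temp_c <= T_KNEE:
        return 0.0
    return I0 * math.exp(k * (temp_c - T_KNEE))

def detector_output(lux, temp_c):
    """Photocurrent (1.15 uA/lux, from the abstract) minus modeled dark current."""
    SENSITIVITY = 1.15e-6  # A/lux
    return SENSITIVITY * lux - dark_current(temp_c)

print(dark_current(125.0))   # ~135 nA by construction
print(dark_current(85.0))    # single-digit nA, same order as the quoted 9.23 nA bound
```

Note that this simple fit lands in the same order of magnitude as the abstract's sub-9.23 nA figure over 25-85 °C, which is consistent with an exponential temperature dependence.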

  13. How to polarise all neutrons in one beam: a high performance polariser and neutron transport system

    NASA Astrophysics Data System (ADS)

    Rodriguez, D. Martin; Bentley, P. M.; Pappas, C.

    2016-09-01

    Polarised neutron beams are used in disciplines as diverse as magnetism, soft matter, and biology. However, most of these applications suffer from low flux, partly because existing neutron polarising methods filter out one of the spin states, giving a transmission of at most 50%. With the purpose of using the neutrons that are usually discarded, we propose a system that splits them according to their polarisation, flips them to match the desired spin direction, and then focuses them at the sample. Monte Carlo (MC) simulations show that this is achievable over a wide wavelength range and with outstanding performance, at the price of a more divergent neutron beam at the sample position.

  14. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims to address most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support, to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF).
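The "datacube" abstraction described above can be pictured as an axis-wise reduction over an n-dimensional array. The sketch below is a generic NumPy illustration of that idea, not Ophidia's actual operator syntax or storage model; the dimension names are assumptions.

```python
import numpy as np

# A toy 3D datacube indexed (time, lat, lon); values are synthetic.
time, lat, lon = 12, 4, 8
cube = np.arange(time * lat * lon, dtype=float).reshape(time, lat, lon)

# The core of many climate analyses is an aggregation along one dimension,
# e.g. a time-mean map (a climatology). A declarative, server-side engine
# would run this near the data; locally it is just an axis reduction.
time_mean_map = cube.mean(axis=0)   # shape (lat, lon)

print(time_mean_map.shape)
```

The point of a server-side framework is that such reductions run where the petabytes live, so only the small (lat, lon) result crosses the network.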

  15. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    PubMed Central

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as in- and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits for storing a single SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs resulting in a
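The quoted storage figure (2 bits per SNP, hence roughly 4 million SNPs per MB) can be reproduced with a small packing sketch. This illustrates the arithmetic only; it is not TheSNPpit's actual on-disk vector format, and the genotype coding is an assumption.

```python
# Pack SNP genotypes coded 0..3 (e.g. AA, AB, BB, no-call; coding assumed)
# at 2 bits each, i.e. four genotypes per byte.

def pack_genotypes(genos):
    """Pack a list of 2-bit genotype codes into a bytearray."""
    packed = bytearray((len(genos) + 3) // 4)
    for i, g in enumerate(genos):
        if not 0 <= g <= 3:
            raise ValueError("genotype codes must fit in 2 bits")
        packed[i // 4] |= g << (2 * (i % 4))
    return packed

def unpack_genotypes(packed, n):
    """Recover n genotype codes from the packed bytes."""
    return [(packed[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

genos = [0, 1, 2, 3, 3, 0, 1, 2, 1]
packed = pack_genotypes(genos)
assert unpack_genotypes(packed, len(genos)) == genos

# 1 MB = 1,048,576 bytes at 4 genotypes/byte -> ~4.2 million SNPs,
# matching the "4 million SNPs per 1 MB" figure in the abstract.
print(1_048_576 * 4)  # 4194304
```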

  16. TheSNPpit-A High Performance Database System for Managing Large Scale SNP Data.

    PubMed

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as in- and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits for storing a single SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs resulting in a

  17. Coal-fired high performance power generating system. Quarterly progress report, October 1--December 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    Our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (FUTAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The Cycle Optimization effort under Task 2 outlines the evolution of our designs. The basic combined cycle approach now includes exhaust gas recirculation to quench the flue gas before it enters the convective air heater. By selecting the quench gas from a downstream location it will be clean enough and cool enough (ca. 300F) to be driven by a commercially available fan and still minimize the volume of the convective air heater. Further modeling studies on the long axial flame, under Task 3, have demonstrated that this configuration is capable of providing the necessary energy flux to the radiant air panels. This flame with its controlled mixing constrains the combustion to take place in a fuel rich environment, thus minimizing the NO{sub x} production. Recent calculations indicate that the NO{sub x} produced is low enough that the SNCR section can further reduce it to within the DOE goal of 0.15 lbs/MBTU of fuel input. Also under Task 3 the air heater design optimization continued.

  18. Microvalve Enabled Digital Microfluidic Systems for High Performance Biochemical and Genetic Analysis.

    PubMed

    Jensen, Erik C; Zeng, Yong; Kim, Jungkyu; Mathies, Richard A

    2010-12-01

    Microfluidic devices offer unparalleled capability for digital microfluidic automation of sample processing and complex assay protocols in medical diagnostic and research applications. In our own work, monolithic membrane valves have enabled the creation of two platforms that precisely manipulate discrete, nanoliter-scale volumes of sample. The digital microfluidic Automaton uses two-dimensional microvalve arrays to combinatorially process nanoliter-scale sample volumes. This programmable system enables rapid integration of diverse assay protocols using a universal processing architecture. Microfabricated emulsion generator array (MEGA) devices integrate actively controlled 3-microvalve pumps to enable on-demand generation of uniform droplets for statistical encapsulation of microbeads and cells. A MEGA device containing 96 channels confers the capability of generating up to 3.4 × 10^6 nanoliter-volume droplets per hour for ultrahigh-throughput detection of rare mutations in a vast background of normal genotypes. These novel digital microfluidic platforms offer significant enhancements in throughput, sensitivity, and programmability for automated sample processing and analysis.
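"Statistical encapsulation of microbeads and cells" is typically governed by Poisson loading: at a mean occupancy λ per droplet, the fraction of droplets holding exactly k objects is λ^k e^(-λ)/k!. A short sketch follows; the λ value is illustrative, not taken from the paper.

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k objects in a droplet) under Poisson loading."""
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 0.1  # illustrative mean beads per droplet (dilute loading; assumed)
p0 = poisson_pmf(0, lam)  # empty droplets
p1 = poisson_pmf(1, lam)  # singly occupied droplets, the useful events
print(f"empty: {p0:.3f}, single: {p1:.3f}, multiple: {1 - p0 - p1:.4f}")
```

At the quoted 3.4 × 10^6 droplets per hour, even this dilute loading would yield on the order of 3 × 10^5 single-bead droplets per hour, which is why dilute Poisson loading is compatible with rare-mutation screening.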

  19. Numerical simulation of the convective heat transfer on high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.

    2016-10-01

    In this work, we consider a coupled system of equations for convective heat transfer and flow, which describes the processes of natural or forced convection in a bounded domain. The mathematical model includes the Navier-Stokes equations for the flow and the heat transfer equation for the temperature. The numerical implementation is based on the finite element method, which allows the complex geometry of the modeled objects to be taken into account. For numerical stabilization of the convective heat transfer equation at high Peclet numbers, we use the streamline upwind Petrov-Galerkin (SUPG) method. The results of the numerical simulations are presented for the 2D formulation. As test problems, we consider flow and heat transfer in a technical construction subject to heat sources and the influence of the air temperature. We couple this formulation with the heat transfer problem in the surrounding ground and investigate the influence of the technical construction on the ground under permafrost conditions, as well as the influence of the ground on the temperature distribution in the construction. Numerical computations are performed on the computational cluster of the North-Eastern Federal University.
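The coupled model described above, in a standard strong-form sketch (notation assumed, since the abstract does not give the equations explicitly):

```latex
% Incompressible Navier-Stokes with temperature-dependent forcing,
% coupled to a convection-diffusion equation for the temperature:
\begin{aligned}
&\rho\Bigl(\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}\Bigr)
  - \nabla\cdot(\mu\,\nabla\mathbf{u}) + \nabla p = \mathbf{f}(T),
  \qquad \nabla\cdot\mathbf{u} = 0, \\
&\rho c_p\Bigl(\frac{\partial T}{\partial t}
  + \mathbf{u}\cdot\nabla T\Bigr)
  - \nabla\cdot(\lambda\,\nabla T) = q .
\end{aligned}
% SUPG stabilization replaces the Galerkin test function v by
% v + \tau_K (\mathbf{u}\cdot\nabla v) on each element K, adding
% streamline diffusion that suppresses the spurious oscillations the
% plain Galerkin method exhibits at high Peclet number.
```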

  20. Architecture of a high-performance PACS based on a shared file system

    NASA Astrophysics Data System (ADS)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high speed, fault tolerant image file server, or Working Storage Unit (WSU), combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low cost, easy to use, yet extremely powerful medical image workstations. The Siemens Litebox™ application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  1. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans Joachim; Duffy, Donald R.

    1989-01-01

    A summary is presented of the concentrator conceptual design work performed under a NASA-funded project. The design study centers around two basic efforts: conceptual design of a self-deploying, high-performance parabolic concentrator; and materials selection for a lightweight, shape-stable concentrator. The primary structural material selected for the concentrator is PEEK/carbon fiber composite. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes a circular shape with a void in the center. The deployable solar concentrator concept is applicable to a range of solar dynamic power systems of 25 kWe to more than 75 kWe.

  2. Kinetic study on external mass transfer in high performance liquid chromatography system.

    PubMed

    Miyabe, Kanji; Kawaguchi, Yuuki; Guiochon, Georges

    2010-04-30

    External mass transfer coefficients (k_f) were measured for a column packed with fully porous C18-silica spherical particles (50.6 μm in diameter), eluted with a methanol/water mixture (70/30, v/v). The pulse response and the peak-parking methods were used. Profiles of elution peaks of alkylbenzene homologues were recorded at flow rates between 0.2 and 2.0 mL min^-1. Peak-parking experiments were conducted under the same conditions, to measure intraparticle and pore diffusivity, and surface diffusion coefficients. Finally, the values of k_f for these compounds at 298 K were derived from the first and second moments of the elution peaks by subtracting the contribution of intraparticle diffusion to band broadening. As a result, the Sherwood number (Sh) was measured under such conditions that the Reynolds (Re) and Schmidt (Sc) numbers varied from 0.004 to 0.05 and from 1.8×10^3 to 2.7×10^3, respectively. We found that Sh is proportional to Re^α and Sc^β and that the correlation between these three nondimensional parameters is almost the same as those given by conventional literature equations. The values of α and β were close to those in the literature correlations, between 0.26 and 0.41 and between 0.31 and 0.36, respectively. The use of the Wilson-Geankoplis equation to estimate k_f values entails a relative error of ca. 15%. So, conventional literature correlations provide correct estimates of k_f in HPLC systems, even for particle sizes of the order of a micrometer.
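The Wilson-Geankoplis correlation mentioned in the abstract can be applied directly. In the sketch below, the correlation form Sh = (1.09/ε)(Re·Sc)^(1/3) is the published one, but the fluid and packing parameters are illustrative stand-ins chosen to fall inside the abstract's Re range, not the paper's data.

```python
# Estimate the external mass transfer coefficient k_f from the
# Wilson-Geankoplis correlation, Sh = (1.09/eps) * (Re*Sc)**(1/3),
# valid for roughly 0.0016 < Re < 55.

def k_f_wilson_geankoplis(u, d_p, D_m, rho, mu, eps=0.4):
    """u: superficial velocity [m/s], d_p: particle diameter [m],
    D_m: molecular diffusivity [m^2/s], rho: density [kg/m^3],
    mu: viscosity [Pa s], eps: external (interstitial) porosity."""
    Re = rho * u * d_p / mu            # particle Reynolds number
    Sc = mu / (rho * D_m)              # Schmidt number
    Sh = (1.09 / eps) * (Re * Sc) ** (1.0 / 3.0)
    return Sh * D_m / d_p              # k_f in m/s

# Illustrative numbers of HPLC magnitude: 50.6 um particles (as in the
# abstract), methanol/water-like fluid, small-molecule diffusivity ~1e-9.
kf = k_f_wilson_geankoplis(u=1e-3, d_p=50.6e-6, D_m=1.0e-9,
                           rho=900.0, mu=1.0e-3)
print(f"k_f ~ {kf:.2e} m/s")
```

With these stand-in values, Re ≈ 0.046, inside the 0.004-0.05 window reported in the abstract, and k_f comes out on the order of 10^-4 m/s.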

  3. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    SciTech Connect

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

    This report documents our first year efforts to address the use of many-core processors for high performance cyber protection. As the demands grow for higher bandwidth (beyond 1 Gbit/sec) on network connections, the need for faster and more efficient solutions to cyber security grows. Fortunately, in recent years, the development of many-core network processors has seen increased interest. Prior working experience with many-core processors has led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space, and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.

  4. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  5. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  6. Application Characterization at Scale: Lessons learned from developing a distributed Open Community Runtime system for High Performance Computing

    SciTech Connect

    Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres; Manzano Franco, Joseph B.; Gao, Guang R.

    2016-05-16

    Since 2012, the U.S. Department of Energy’s X-Stack program has been developing solutions including runtime systems, programming models, languages, compilers, and tools for the Exascale system software to address crucial performance and power requirements. Fine grain programming models and runtime systems show a great potential to efficiently utilize the underlying hardware. Thus, they are essential to many X-Stack efforts. An abundant amount of small tasks can better utilize the vast parallelism available on current and future machines. Moreover, finer tasks can recover faster and adapt better, due to a decrease in state and control. Nevertheless, current applications have been written to exploit old paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing). To fully utilize the advantages of these new systems, applications need to be adapted to these new paradigms. As part of the applications’ porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand the application performance bottlenecks and how to resolve them. This paper presents a characterization study for a novel high performance runtime system, called the Open Community Runtime, using key HPC kernels as its vehicle. This study has the following contributions: one of the first high performance, fine grain, distributed memory runtime systems implementing the OCR standard (version 0.99a); and a characterization study of key HPC kernels in terms of runtime primitives running on both intra and inter node environments. Running on a general purpose cluster, we have found up to 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.
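The quoted relative speed-ups translate directly into parallel efficiency (speed-up divided by total core count), assuming the baseline of the "relative speed-up" is a single core, which the abstract does not state. A quick check of the abstract's numbers:

```python
def parallel_efficiency(speedup, nodes, cores_per_node):
    """Speed-up per core, assuming a single-core baseline (assumption)."""
    return speedup / (nodes * cores_per_node)

# Figures from the abstract:
chol = parallel_efficiency(1635, 128, 16)  # Cholesky on 2048 cores
sw   = parallel_efficiency(1864, 128, 30)  # Smith-Waterman on 3840 cores
print(f"Cholesky: {chol:.0%}, Smith-Waterman: {sw:.0%}")
```

Under that assumption, the tiled Cholesky run sustains roughly 80% efficiency at 2048 cores, while the Smith-Waterman run sits near 49% at 3840 cores.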

  7. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

    With the ever increasing demand for higher bandwidth and processing capacity of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  8. Relationships of cognitive and metacognitive learning strategies to mathematics achievement in four high-performing East Asian education systems.

    PubMed

    Areepattamannil, Shaljan; Caleon, Imelda S

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education systems, memorization strategies were negatively associated with mathematics achievement, whereas control strategies were positively associated with mathematics achievement. However, the association between elaboration strategies and mathematics achievement was mixed. In Shanghai-China and Korea, elaboration strategies were not associated with mathematics achievement. In Hong Kong-China and Singapore, on the other hand, elaboration strategies were negatively associated with mathematics achievement. Implications of these findings are briefly discussed.

  9. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

    High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole-node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  10. Small Delay and High Performance AD/DA Converters of Lease Circuit System for AM&FM Broadcast

    NASA Astrophysics Data System (ADS)

    Takato, Kenji; Suzuki, Dai; Ishii, Takashi; Kobayashi, Masato; Yamada, Hirokazu; Amano, Shigeru

    Many AM&FM broadcasting stations in Japan are connected by NTT's leased circuit system, for which a small-delay, high-performance AD/DA converter was developed. The system was designed to the ITU-T J.41 Recommendation (384 kbps); the transmission signal is 11 bit at 32 kHz, and the gain-frequency characteristic between 40 Hz and 15 kHz must be quite flat. Although the ΔΣ AD/DA converter LSIs on the market today for audio applications realize very high performance, that performance is not sufficient for the leased circuit system. We found that it is not possible to meet the delay and gain-frequency requirements using a ΔΣ AD/DA converter LSI in normal operation alone, because the 15 kHz upper signal frequency and the 16 kHz Nyquist frequency are so close that aliasing occurs around the Nyquist frequency. In this paper, we designed an AD/DA architecture with small delay (1 ms) and a sharp cut-off LPF (100 dB attenuation at 16 kHz, and 1500 dB/oct from 15 kHz to 16 kHz) by operating the ΔΣ AD/DA converter LSIs at an over-sampling rate such as 128 kHz and by adding a custom LPF designed as an Infinite Impulse Response (IIR) filter. The IIR filter is a 16th-order elliptic type consisting of eight biquad filters in series. We describe how to evaluate the stability of the IIR filter theoretically, by calculating the frequency response, pole-zero layout, and impulse response of each biquad filter, and experimentally, by adding an overflow detection circuit to each filter and applying an overload input signal.
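    The theoretical stability check described above reduces, for each biquad section, to verifying that both poles of the section's denominator lie inside the unit circle. A minimal sketch in Python with NumPy (the coefficients are illustrative, not the paper's filter):

```python
import numpy as np

# A biquad section H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)
# is stable when both poles, i.e. the roots of z^2 + a1 z + a2, lie
# strictly inside the unit circle.
def biquad_stable(a1, a2):
    poles = np.roots([1.0, a1, a2])
    return bool(np.all(np.abs(poles) < 1.0))

# Illustrative coefficients:
print(biquad_stable(-1.8, 0.82))   # poles at 0.9 +/- 0.1j, |p| < 1 -> True
print(biquad_stable(-2.1, 1.12))   # poles outside the unit circle -> False
```

    Checking each of the eight cascaded sections this way, rather than the full 16th-order polynomial at once, mirrors the paper's per-biquad evaluation and is numerically far better conditioned.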

  11. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

    DOE has launched a program to make a step change in power plant steam conditions, to 1500 F, since the highest performance gains can be achieved in a 1500 F steam system when a topping turbine is used with a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested; it has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  12. Coal-fired high performance power generating system. Quarterly progress report, October 1, 1994--December 31, 1994

    SciTech Connect

    1995-08-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of (1) >47% thermal efficiency; (2) NOx, SOx and particulate emissions ≤25% of NSPS; (3) coal providing ≥65% of heat input; (4) all solid wastes benign. In our design consideration, we have tried to render all waste streams benign and, if possible, convert them to a commercial product. It appears that vitrified slag has commercial value. If the flyash is reinjected through the furnace, along with the dry bottom ash, then the amount of the less valuable solid waste stream (ash) can be minimized. A limitation on this procedure arises if it results in the buildup of toxic metal concentrations in either the slag, the flyash or other APCD components. We have assembled analytical tools to describe the progress of specific toxic metals in our system. The outline of the analytical procedure is presented in the first section of this report. The strengths and corrosion resistance of five candidate refractories have been studied in this quarter. Some of the results are presented and compared for selected preparation conditions (mixing, drying time and drying temperatures). A 100 hour pilot-scale slagging combustor test of the prototype radiant panel is being planned. Several potential refractory brick materials are under review and five will be selected for the first 100 hour test. The design of the prototype panel is presented along with some of the test requirements.

  13. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    SciTech Connect

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-05-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  14. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  15. Use of subcarrier multiplexing for self-routing of data packets in a high-performance system area network

    NASA Astrophysics Data System (ADS)

    Saraswat, Sanjay

    1998-10-01

    In self-routing packet networks, the state of intermediate nodes (switches) is set or reset on the basis of the information present in the packet header. Subcarrier multiplexing (SCM) modulates a number of frequency-separated RF subcarriers onto a common laser at a single wavelength. SCM has the advantage of high data throughput. It also requires fewer opto-electronic components and avoids walk-off between header and payload due to fiber dispersion. In this paper we describe a novel use of subcarrier multiplexing for self-routing of data packets within the switching fabric of a high performance system area network. Using SCM, data packets are routed optically to the destination without being converted to the electrical domain at the intermediate stages within the network.

  16. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

    The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of Patented AtheroEdge™ systems (Global Biomedical Technologies, Inc, CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided, and the problem of which distance metric to adopt is addressed. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and showed final segmentation results for both techniques. The overall rationale is to provide user-independent high-performance techniques suitable for screening and remote monitoring.

  17. Evaluation of C/C-SiC Composites as Potential Candidate Materials for High Performance Braking Systems

    NASA Astrophysics Data System (ADS)

    Saptono Duryat, Rahmat

    2016-05-01

    This paper is aimed at evaluating the characteristics and performance of C/C-SiC composites as potential candidate materials for high performance braking systems. A set of material specifications was derived from specific engineering design requirements. Analysis was performed by formulating the function(s), constraint(s), and objective(s) of design and materials selection. The function of a friction material is chiefly to provide friction and to absorb and dissipate energy, while withstanding load and maintaining structural adequacy and tribological characteristics at high temperature. The objective of the materials selection and design is to maximize the absorption and dissipation of energy and to minimize weight and cost. Candidate materials were evaluated based on their friction and wear, thermal capacity and conductivity, structural properties, manufacturing properties, and densities. The present paper provides a state-of-the-art example of how material, function, geometry, and design are all interrelated.

  18. New generation high performance in situ polarized 3He system for time-of-flight beam at spallation sources

    NASA Astrophysics Data System (ADS)

    Jiang, C. Y.; Tong, X.; Brown, D. R.; Glavic, A.; Ambaye, H.; Goyette, R.; Hoffmann, M.; Parizzi, A. A.; Robertson, L.; Lauter, V.

    2017-02-01

    Modern spallation neutron sources generate high intensity neutron beams with a broad wavelength band, applied to exploring new nano- and meso-scale materials from a few atomic monolayers thick to complicated prototype device-like systems with multiple buried interfaces. The availability of high performance neutron polarizers and analyzers in neutron scattering experiments is vital for understanding magnetism in systems with novel functionalities. We report the development of a new generation of the in situ polarized 3He neutron polarization analyzer for the Magnetism Reflectometer at the Spallation Neutron Source at Oak Ridge National Laboratory. With a new optical layout and laser system, the 3He polarization reached and was maintained at 84%, compared to 76% in the first-generation system. The improved polarization yields a transmission function varying from 50% to 15% for the polarized neutron beam over the 2-9 Angstrom wavelength band. This achievement enables a new class of experiments with optimal sensitivity to very small magnetic moments in nano-systems and opens up new horizons for applications.

  19. Development of a high-performance gantry system for a new generation of optical slope measuring profilers

    NASA Astrophysics Data System (ADS)

    Assoufid, Lahsen; Brown, Nathan; Crews, Dan; Sullivan, Joseph; Erdmann, Mark; Qian, Jun; Jemian, Pete; Yashchuk, Valeriy V.; Takacs, Peter Z.; Artemiev, Nikolay A.; Merthe, Daniel J.; McKinney, Wayne R.; Siewert, Frank; Zeschke, Thomas

    2013-05-01

    A new high-performance metrology gantry system has been developed within the scope of collaborative efforts of optics groups at the US Department of Energy synchrotron radiation facilities as well as the BESSY-II synchrotron at the Helmholtz Zentrum Berlin (Germany) and the participation of industrial vendors of x-ray optics and metrology instrumentation directed to create a new generation of optical slope measuring systems (OSMS) [1]. The slope measurement accuracy of the OSMS is expected to be <50 nrad, which is strongly required for the current and future metrology of x-ray optics for the next generation of light sources. The fabricated system was installed and commissioned (December 2012) at the Advanced Photon Source (APS) at Argonne National Laboratory to replace the aging APS Long Trace Profiler (APS LTP-II). Preliminary tests were conducted (in January and May 2012) using the optical system configuration of the Nanometer Optical Component Measuring Machine (NOM) developed at Helmholtz Zentrum Berlin (HZB)/BESSY-II. With a flat Si mirror that is 350 mm long and has 200 nrad rms nominal slope error over a useful length of 300 mm, the system provides a repeatability of about 53 nrad. This value corresponds to the design performance of 50 nrad rms accuracy for inspection of ultra-precise flat optics.

  20. Monitoring and preparation of neoagaro- and agaro-oligosaccharide products by high performance anion exchange chromatography systems.

    PubMed

    Kazłowski, Bartosz; Pan, Chorng Liang; Ko, Yuan Tih

    2015-05-20

    A series of neoagaro-oligosaccharides (NAOS) were prepared by β-agarase digestion, and agaro-oligosaccharides (AOS) by HCl hydrolysis, from agarose with defined quantity and degree of polymerization (DP). Chain-length distributions in the crude product mixtures were monitored by two high performance anion exchange chromatography systems coupled with a pulsed amperometric detector. Method 1 utilized two separation columns, a CarboPac™ PA1 and a CarboPac™ PA100, connected in series, and method 2 used the PA100 alone. Method 1 resolved the products in size ranges of DP 1-46 for NAOS and DP 1-32 for AOS. Method 2 clearly resolved saccharide product sizes up to DP 26. The optimized system, utilizing a semi-preparative CarboPac™ PA100 column, was connected with a fraction collector to isolate and quantify individually separated products. This study established systems for the preparation, qualitative and quantitative measurement, and isolation of various sizes of oligomers generated from agarose.

  1. Engineering development of coal-fired high-performance power systems. Progress report, April 1--June 30, 1996

    SciTech Connect

    1996-12-31

    In Phase 1 of the project, a conceptual design of a coal-fired, high-performance power system (HIPPS) was developed, and small-scale R and D was done in critical areas of the design. The current phase of the project includes development through the pilot plant stage and design of a prototype plant that would be built in Phase 3. The power-generating system being developed in this project will be an improvement over current coal-fired systems. It is a combined-cycle plant. This arrangement is referred to as the All Coal HIPPS because it does not require any other fuels for normal operation. A fluidized bed, air-blown pyrolyzer converts coal into fuel gas and char. The char is fired in a high-temperature advanced furnace (HITAF) which heats both air for a gas turbine and steam for a steam turbine. The fuel gas from the pyrolyzer goes to a topping combustor, where it is used to raise the temperature of the air entering the gas turbine to 1288°C. In addition to the HITAF, steam duty is achieved with a heat-recovery steam generator (HRSG) in the gas turbine exhaust stream and economizers in the HITAF flue gas exhaust stream. Progress during the quarter is described.
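    The thermodynamic appeal of the combined-cycle arrangement above is captured by the textbook relation for a topping gas turbine plus a steam bottoming cycle that recovers the gas turbine's rejected heat. A rough sketch with illustrative component efficiencies (assumed values, not the HIPPS design figures):

```python
# Textbook combined-cycle efficiency: the steam bottoming cycle converts a
# fraction (recovery * eta_st) of the heat the gas turbine rejects.
def combined_cycle_eta(eta_gt, eta_st, recovery=1.0):
    return eta_gt + (1.0 - eta_gt) * recovery * eta_st

# Illustrative values: 30% gas turbine, 35% steam cycle, 85% heat recovery
print(round(combined_cycle_eta(0.30, 0.35, 0.85), 3))  # -> 0.508
```

    Even with modest component efficiencies, the combination exceeds either cycle alone, which is why raising the gas-turbine inlet temperature (here via the topping combustor) pushes plant efficiency toward the program's >47% goal.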

  2. A high performance system to study the influence of temperature in on-line solid-phase extraction capillary electrophoresis.

    PubMed

    Tascon, Marcos; Benavente, Fernando; Sanz-Nebot, Victoria; Gagliardi, Leonardo G

    2015-03-10

    A novel high performance system to control the temperature of the microcartridge in on-line solid phase extraction capillary electrophoresis (SPE-CE) is introduced. The mini-device consists of a thermostatic bath that fits inside the cassette of any commercial CE instrument, with its temperature controlled by an external liquid circuit connecting three different water baths. The circuits are controlled from a switchboard connected to an array of electrovalves, allowing the water circulation through the mini-thermostatic-bath to be rapidly alternated between temperatures from 5 to 90 °C. The combination of the mini-device and the forced-air thermostatization system of the commercial CE instrument allows independent optimization of the temperatures of the sample loading, clean-up, analyte elution and electrophoretic separation steps. The system is used to study the effect of temperature on the C18-SPE-CE analysis of the opioid peptides Dynorphin A (Dyn A), Endomorphin-1 (END) and Met-enkephalin (MET), in both standard solutions and spiked plasma samples. Extraction recoveries were shown to depend, non-monotonically, on the microcartridge temperature during sample loading, reaching a maximum at 60 °C. The results demonstrate the potential of temperature control to further enhance sensitivity in SPE-CE when analytes are thermally stable.

  3. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans J.

    1991-01-01

    NASA has initiated technology development programs to develop advanced solar dynamic power systems and components for space applications beyond 2000. Conceptual design work that was performed is described. The main efforts were the: (1) conceptual design of a self-deploying, high-performance parabolic concentrator; and (2) materials selection for a lightweight, shape-stable concentrator. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes an annular shape with a void in the center. This deployable concentrator concept is applicable to a range of solar dynamic power systems from 25 kWe to in excess of 75 kWe. The concept allows for a family of power system sizes all using the same packaging and deployment technique. The primary structural material selected for the concentrator is a polyether ether ketone (PEEK)/carbon fiber composite, also referred to as APC-2 or Victrex. This composite has a nearly neutral coefficient of thermal expansion, which leads to shape-stable characteristics under thermal gradient conditions. Substantial efforts were undertaken to produce a highly specular surface on the composite. The overall coefficient of thermal expansion of the composite laminate is near zero, but thermally induced stresses due to micro-movement of the fibers and matrix relative to each other cause the surface to become nonspecular.

  4. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1996-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper, only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.
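    As a much-simplified illustration of the identification step (ordinary least squares on a noise-free first-order discrete model, not the paper's frequency-domain maximum-likelihood method or its 4th-order state-space structure):

```python
import numpy as np

# Recover the pole and gain of x[k+1] = a*x[k] + b*u[k] from simulated
# input/output data by stacking regressors and solving least squares.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
u = rng.standard_normal(200)
x = np.zeros(201)
for k in range(200):
    x[k + 1] = a_true * x[k] + b_true * u[k]

# Regressor matrix [x[k], u[k]]; solve for the parameter vector [a, b].
A = np.column_stack([x[:-1], u])
a_hat, b_hat = np.linalg.lstsq(A, x[1:], rcond=None)[0]
print(round(a_hat, 3), round(b_hat, 3))  # recovers ~0.9 and ~0.5
```

    The flight problem adds what this toy omits: measurement noise, input time delays (handled in preprocessing), and a 4th-order lateral-directional structure, which is why a frequency-domain maximum-likelihood formulation was used instead.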

  5. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1999-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the estimated closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  6. A high performance liquid chromatography system for quantification of hydroxyl radical formation by determination of dihydroxy benzoic acids.

    PubMed

    Owen, R W; Wimonwatwatee, T; Spiegelhalder, B; Bartsch, H

    1996-08-01

    The hypoxanthine/xanthine oxidase enzyme system is known to produce the superoxide ion and hydrogen peroxide during the hydroxylation of hypoxanthine via xanthine to uric acid. When chelated iron is included in this system, superoxide reduces iron (III) to iron(II) and the iron(II)-chelate further reacts with hydrogen peroxide to form the highly reactive hydroxyl radical. Because of the limitations of colourimetric and spectrophotometric techniques by which, to date, the mechanisms of hydroxyl radical formation in the hypoxanthine/xanthine oxidase system have been monitored, a high performance liquid chromatography method utilizing the ion-pair reagent tetrabutylammonium hydroxide and salicylic acid as an aromatic probe for quantification of hydroxyl radical formation was set up. In the hypoxanthine/xanthine oxidase system the major products of hydroxyl radical attack on salicylic acid were 2,5-dihydroxy benzoic acid and 2,3-dihydroxy benzoic acid in the approximate ratio of 5:1. That the hydroxyl radical is involved in the hydroxylation of salicylic acid in this system was demonstrated by the potency especially of dimethyl sulphoxide, butanol and ethanol as scavengers. Phytic acid, which is considered to be an important protective dietary constituent against colorectal cancer, inhibited hydroxylation of salicylic acid at a concentration one order of magnitude lower than the classical scavengers, but was only effective in the absence of EDTA. The method has been applied to the study of free radical generation in faeces, and preliminary results indicate that the faecal flora are able to produce reactive oxygen species in abundance.

  7. Miniaturized ultra-high performance liquid chromatography coupled to electrochemical detection: Investigation of system performance for neurochemical analysis.

    PubMed

    Van Schoors, Jolien; Maes, Katrien; Van Wanseele, Yannick; Broeckhoven, Ken; Van Eeckhaut, Ann

    2016-01-04

    The interest in implementing miniaturized ultra-high performance liquid chromatography (UHPLC) in neurochemical research is growing because of the need for faster, more selective and more sensitive neurotransmitter analyses. The instrument performance of a tailor-designed microbore UHPLC system coupled to electrochemical detection (ECD) is investigated, focusing on quantitative monoamine determination in in vivo microdialysis samples. The use of a microbore column (1.0 mm I.D.) requires miniaturization of the entire instrument, though a balance between extra-column band broadening and injection volume must be considered. This is accomplished through the user-defined Performance Optimizing Injection Sequence, whereby 5 μL of sample is injected on the column with a measured extra-column variance of 4.5-9.0 μL² and only 7 μL of sample uptake. Different sub-2 μm and superficially porous particle stationary phases are compared by means of the kinetic plot approach. Peak efficiencies of about 16000-35000 theoretical plates are obtained for the Acquity UPLC BEH C18 column within a 13 min analysis time. Furthermore, the coupling to ECD is shown to be suitable for microbore UHPLC analysis thanks to the miniaturized flow-cell design, sufficiently fast data acquisition and mathematical data filtering. Ultimately, injection of in vivo samples demonstrates the applicability of the system for microdialysis analysis.
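    The trade-off between extra-column band broadening and injection volume noted above follows from variance additivity: the observed peak variance is the column's own variance plus the extra-column variance, so the apparent plate count is N_obs = V_R² / (V_R²/N_col + σ²_ec). A quick sketch using the measured extra-column variance range, with an assumed retention volume and column plate count (illustrative values, not the paper's):

```python
def apparent_plates(n_col, v_r, sigma2_ec):
    """Apparent plate count when extra-column variance (uL^2) adds to the
    column's own peak variance: N_obs = V_R^2 / (V_R^2/N_col + sigma2_ec)."""
    sigma2_col = v_r ** 2 / n_col
    return v_r ** 2 / (sigma2_col + sigma2_ec)

# Assumed: 20000-plate column, 300 uL retention volume;
# 4.5-9.0 uL^2 is the extra-column variance range from the text.
for s2 in (4.5, 9.0):
    print(round(apparent_plates(20000, 300.0, s2)))  # -> 10000, then 6667
```

    The example shows why microbore work is so sensitive to instrument dead volume: a few μL² of extra-column variance can halve the plate count a 1.0 mm I.D. column actually delivers.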

  8. A film bulk acoustic resonator-based high-performance pressure sensor integrated with temperature control system

    NASA Astrophysics Data System (ADS)

    Zhang, Mengying; Zhao, Zhan; Du, Lidong; Fang, Zhen

    2017-04-01

    This paper presents a high-performance pressure sensor based on a film bulk acoustic resonator (FBAR). The support film of the FBAR chip is made of silicon nitride, and the part under the resonator area was etched to enhance the sensitivity and improve the linearity of the pressure sensor. A micro resistor temperature sensor and a micro resistor heater were integrated in the chip to monitor and control the operating temperature. The sensor chip was fabricated and packaged in an oscillator circuit for differential pressure detection. When the detected pressure ranged from -100 hPa to 600 hPa, the sensitivity of the improved FBAR pressure sensor was -0.967 kHz/hPa, namely -0.69 ppm/hPa, which is 19% higher than that of existing sensors with a complete support film. The nonlinearity of the improved sensor was less than ±0.35%, while that of the existing sensor was ±5%. To eliminate measurement errors from humidity, the temperature control system integrated in the sensor chip controlled the temperature of the resonator up to 75 °C, with an accuracy of ±0.015 °C and a power of 20 mW.
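    As a consistency check on the two sensitivity figures quoted above, the kHz/hPa and ppm/hPa values together imply the resonator's operating frequency (the ~1.4 GHz result is inferred here, not stated in the abstract):

```python
# The abstract reports the sensitivity both as -0.967 kHz/hPa and as
# -0.69 ppm/hPa. The two are consistent only for a resonance frequency
# f0 = (0.967e3 Hz/hPa) / (0.69e-6 /hPa), in the typical GHz range of FBARs.
df_per_hpa = -0.967e3        # Hz per hPa
ppm_per_hpa = -0.69          # ppm per hPa
f0 = df_per_hpa / (ppm_per_hpa * 1e-6)
print(round(f0 / 1e9, 2))    # -> 1.4  (GHz, implied resonance frequency)
```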

  9. [Determination of 61 central nervous system drugs in plasma by protein precipitation-high performance liquid chromatography].

    PubMed

    Zhang, Yin; Chen, Chonghong; Lin, Ling; Chen, Yinong

    2009-11-01

    A method was established for the determination of 61 central nervous system drugs in plasma by using protein precipitation combined with high performance liquid chromatography-diode array detection (HPLC-DAD). A volume of 1.5 mL acetonitrile was added to 1 mL plasma; after vortexing, centrifugation and filtration, the supernatant was directly injected into the HPLC. The separation was performed on an Agilent TC-C18 column (250 mm x 4.6 mm, 5 microm) with acetonitrile and phosphate buffer solution as the mobile phase, by gradient elution at a flow rate of 1.5 mL/min. The detection wavelength was 210 nm; full spectra were recorded from 200-364 nm. The recoveries of the 61 drugs were greater than 80%, with relative standard deviations (RSDs) ranging from 0.94% to 11.23%. The protein precipitation method is simple, rapid, and low-cost, with good recovery and reproducibility, and is suitable for the general pretreatment in systematic toxicological analysis (STA) of the 61 drugs.

  10. Synthesis and Characterization of High Performance Polyimides Containing the Bicyclo(2.2.2)oct-7-ene Ring System

    NASA Technical Reports Server (NTRS)

    Alvarado, M.; Harruna, I. I.; Bota, K. B.

    1997-01-01

    Due to the difficulty of processing polyimides with high temperature stability and good solvent resistance, we have synthesized high performance polyimides containing the bicyclo(2.2.2)oct-7-ene ring system, which can easily be fabricated into films and fibers and subsequently converted to the more stable aromatic polyimides. To improve processability, we prepared two polyimides by reacting 1,4-phenylenediamine and 1,3-phenylenediamine with bicyclo(2.2.2)-7-octene-2,3,5,6-tetracarboxylic dianhydride. The polyimides were characterized by FTIR, FTNMR, solubility, and thermal analysis. Thermogravimetric analysis (TGA) showed that the 1,4-phenylenediamine- and 1,3-phenylenediamine-containing polyimides were stable up to 460 and 379 C, respectively, under a nitrogen atmosphere. No melting transitions were observed for either polyimide. The 1,4-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide and methane sulfonic acid and soluble in sulfuric acid at room temperature. The 1,3-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide, tetramethyl urea, and N,N-dimethyl acetamide and soluble in methane sulfonic acid and sulfuric acid.

  11. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. The server-side architecture provides a real-time stream processing system that utilizes server-based NVIDIA Graphics Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is built on the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new

  12. High-Performance SiC/SiC Ceramic Composite Systems Developed for 1315 C (2400 F) Engine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann; Morscher, Gregory N.; Bhatt, Ramakrishna T.

    2004-01-01

    As structural materials for hot-section components in advanced aerospace and land-based gas turbine engines, silicon carbide (SiC) ceramic matrix composites reinforced by high performance SiC fibers offer a variety of performance advantages over current bill-of-materials, such as nickel-based superalloys. These advantages stem from the SiC/SiC composites' higher temperature capability for a given structural load, lower density (approximately 30 to 50 percent of metal density), and lower thermal expansion. These properties should, in turn, result in many important engine benefits, such as reduced component cooling air requirements, simpler component design, reduced support structure weight, improved fuel efficiency, reduced emissions, higher blade frequencies, reduced blade clearances, and higher thrust. Under the NASA Ultra-Efficient Engine Technology (UEET) Project, much progress has been made at the NASA Glenn Research Center in identifying and optimizing two high-performance SiC/SiC composite systems. The table compares typical properties of oxide/oxide panels and SiC/SiC panels formed by the random stacking of balanced 0 degrees/90 degrees fabric pieces reinforced by the indicated fiber types. The Glenn SiC/SiC systems A and B (shaded area of the table) were reinforced by the Sylramic-iBN SiC fiber, which was produced at Glenn by thermal treatment of the commercial Sylramic SiC fiber (Dow Corning, Midland, MI; ref. 2). The treatment process (1) removes boron from the Sylramic fiber, thereby improving fiber creep, rupture, and oxidation resistance, and (2) allows the boron to react with nitrogen to form a thin in situ grown BN coating on the fiber surface, thereby providing an oxidation-resistant buffer layer between contacting fibers in the fabric and the final composite. The fabric stacks for all SiC/SiC panels were provided to GE Power Systems Composites for chemical vapor infiltration of Glenn-designed BN fiber coatings and conventional SiC matrices.

  13. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy, enabling efficient resource utilization, is considered a key requirement for next-generation DCs, resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer. Numerical investigation of system performance when the port count of the optical switch is scaled to 32x32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  14. Methylmercury determination using a hyphenated high performance liquid chromatography ultraviolet cold vapor multipath atomic absorption spectrometry system

    NASA Astrophysics Data System (ADS)

    Campos, Reinaldo C.; Gonçalves, Rodrigo A.; Brandão, Geisamanda P.; Azevedo, Marlo S.; Oliveira, Fabiana; Wasserman, Julio

    2009-06-01

    The present work investigates the use of a multipath cell atomic absorption mercury detector for mercury speciation analysis in a hyphenated high performance liquid chromatography assembly. The multipath absorption cell multiplies the optical path, while energy losses are compensated by a very intense primary source. Zeeman-effect background correction compensates for non-specific absorption. For the separation step, the mobile phase consisted of a 0.010% m/v mercaptoethanol solution in 5% methanol (pH 5), a C18 column was used as the stationary phase, and post-column treatment was performed by UV irradiation (60 °C, 13 W). The eluate was then merged with 3 mol L-1 HCl, reduction was performed with a NaBH4 solution, and the Hg vapor formed was separated in the gas-liquid separator and carried through a desiccant membrane to the detector. The detector was easily attached to the system, since an external gas flow to the gas-liquid separator was provided. A multivariate approach was used to optimize the procedure, and peak area was used for measurement. Instrumental limits of detection of 0.05 µg L-1 were obtained for both ionic mercury (Hg2+) and methylmercury (CH3Hg+), for an injection volume of 200 µL. The multipath atomic absorption spectrometer proved to be a competitive mercury detector for hyphenated systems relative to the more commonly used atomic fluorescence and inductively coupled plasma mass spectrometric detectors. Preliminary application studies were performed for the determination of methylmercury in sediments.

  15. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap(HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10(exp 12) charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10(exp 2) MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and potentials up to 20 kV. To date, a series of normal matter experiments using positive and negative ions has been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a non-destructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by making use of the actual annihilation products as a unique indicator. The immediate yield of an annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence meters, and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic ray events from the multi-particle annihilation shower. This system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling mode that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods; this provides an independent method of validating particle lifetimes. The second is a high-speed mode accumulating information on a microsecond time scale, useful when trapped antiparticles are extracted
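The coincidence-with-voting idea described above can be sketched in software. This is an illustrative model only, not the HiPAT electronics; the 50 ns window and 3-of-4 vote threshold are assumed values chosen for the example:

```python
# Illustrative sketch of coincidence "voting": an annihilation shower lights
# up several scintillator paddles within a narrow time window, while a single
# cosmic ray typically fires only one or two, so requiring N-of-M paddles in
# coincidence rejects most single-particle background.
def coincident_events(hit_times_per_paddle, window_s=50e-9, votes_required=3):
    """Count time windows in which at least `votes_required` paddles fired."""
    # Flatten to (time, paddle) pairs and scan them in time order.
    hits = sorted((t, p) for p, times in enumerate(hit_times_per_paddle)
                  for t in times)
    events, i = 0, 0
    while i < len(hits):
        t0 = hits[i][0]
        # Distinct paddles that fired inside this coincidence window.
        paddles = {p for t, p in hits if t0 <= t <= t0 + window_s}
        if len(paddles) >= votes_required:
            events += 1
        # Advance past every hit inside the window before opening a new one.
        while i < len(hits) and hits[i][0] <= t0 + window_s:
            i += 1
    return events

# Paddle hit times (seconds): one 4-paddle burst (annihilation-like)
# plus two isolated single-paddle hits (cosmic-ray-like).
paddles = [
    [1.00e-6, 5.00e-6],   # paddle 0
    [1.01e-6],            # paddle 1
    [1.02e-6],            # paddle 2
    [1.03e-6, 9.00e-6],   # paddle 3
]
print(coincident_events(paddles))  # 1
```

With `votes_required=1` the same data yields three "events", showing how the voting threshold is what discriminates showers from single cosmic rays.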

  16. Potential application of ultra-high performance fiber-reinforced concrete with wet-mix shotcrete system in tunneling

    NASA Astrophysics Data System (ADS)

    Goblet, Valentine Pascale

    In the tunneling industry, shotcrete has been used for several decades. Wet-mix spray-on methods allow application in complex underground profiles and shapes. The need for time-efficient spraying methods and constructability of lining coverage opens the door for technologies like steel and synthetic fiber reinforced shotcrete to achieve a uniform, good quality product. An important advantage of fiber reinforced concrete in shotcrete systems for tunneling is that almost no steel fixing is required. This leads to several other advantages, including safer working conditions during excavation, lower cost, and higher quality. However, there are still some limitations. This research presents an analysis and evaluation of the potential application of a new R&D product, ultra-high-performance fiber-reinforced concrete (UHP-FRC), developed by UTA associate professor Shih-Ho (Simon) Chao, focusing on its application to tunnel lining using a wet-mix shotcrete system. The objective of this study is to evaluate the potential application of UHP-FRC with wet-mix shotcrete equipment. This is the first time UHP-FRC has been used for this purpose; hence, this thesis also presents a preliminary evaluation of the compressive and tensile strength of UHP-FRC after application with shotcrete equipment and identifies proper shotcrete procedures for mixing and application of UHP-FRC. A test sample was created with the wet-mix shotcrete system for further compressive and tensile strength analysis, and a plan was proposed for the best way to use UHP-FRC in lining systems for the tunneling industry. As a result of this study, the viscosity required for pumpability was achieved for UHP-FRC; however, the mixer was not fast enough to mix this material efficiently. After 2 days, the material showed a strength of 7,200 psi; however, vertical shotcrete was not achieved

  17. Inverse opal-inspired, nanoscaffold battery separators: a new membrane opportunity for high-performance energy storage systems.

    PubMed

    Kim, Jung-Hwan; Kim, Jeong-Hoon; Choi, Keun-Ho; Yu, Hyung Kyun; Kim, Jong Hun; Lee, Joo Sung; Lee, Sang-Young

    2014-08-13

    The facilitation of ion/electron transport, along with ever-increasing demand for high-energy density, is a key to boosting the development of energy storage systems such as lithium-ion batteries. Among major battery components, separator membranes have not been the center of attention compared to other electrochemically active materials, despite their important roles in allowing ionic flow and preventing electrical contact between electrodes. Here, we present a new class of battery separator based on inverse opal-inspired, seamless nanoscaffold structure ("IO separator"), as an unprecedented membrane opportunity to enable remarkable advances in cell performance far beyond those accessible with conventional battery separators. The IO separator is easily fabricated through one-pot, evaporation-induced self-assembly of colloidal silica nanoparticles in the presence of ultraviolet (UV)-curable triacrylate monomer inside a nonwoven substrate, followed by UV-cross-linking and selective removal of the silica nanoparticle superlattices. The precisely ordered/well-reticulated nanoporous structure of IO separator allows significant improvement in ion transfer toward electrodes. The IO separator-driven facilitation of the ion transport phenomena is expected to play a critical role in the realization of high-performance batteries (in particular, under harsh conditions such as high-mass-loading electrodes, fast charging/discharging, and highly polar liquid electrolyte). Moreover, the IO separator enables the movement of the Ragone plot curves to a more desirable position representing high-energy/high-power density, without tailoring other battery materials and configurations. This study provides a new perspective on battery separators: a paradigm shift from plain porous films to pseudoelectrochemically active nanomembranes that can influence the charge/discharge reaction.

  18. High-performance two-axis gimbal system for free space laser communications onboard unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Locke, Michael; Czarnomski, Mariusz; Qadir, Ashraf; Setness, Brock; Baer, Nicolai; Meyer, Jennifer; Semke, William H.

    2011-03-01

    A custom-designed and manufactured gimbal with a wide field of view and fast response time has been developed. This enhanced custom design is a 24-volt system with integrated motor controllers and drivers that offers a full 180° field of view in both azimuth and elevation; this provides more continuous tracking capability as well as increased velocities of up to 479° per second. The addition of active high-frequency vibration control, to complement the passive vibration isolation system, is also in development. The ultimate goal of this research is to achieve affordable, reliable, and secure air-to-air laser communications between two separate remotely piloted aircraft. As a proof of concept, the practical implementation of an air-to-ground laser-based video communications payload system flown by a small Unmanned Aerial Vehicle (UAV) will be demonstrated. A numerical tracking algorithm has been written, tested, and used to aim the airborne laser transmitter at a stationary ground-based receiver with known GPS coordinates; however, further refinement of the tracking capabilities depends on an improved gimbal design for precision pointing of the airborne laser transmitter. The current gimbal pointing system is a two-axis, commercial off-the-shelf component, which is limited in both range and velocity. The current design is capable of 360° of pan and 78° of tilt at a velocity of 60° per second. The control algorithm used for aiming the gimbal is executed on a PC-104 format embedded computer onboard the payload to accurately track a stationary ground-based receiver. This algorithm autonomously calculates a line-of-sight vector in real time using the UAV autopilot's Differential Global Positioning System (DGPS), which provides latitude, longitude, and altitude, and Inertial Measurement Unit (IMU), which provides roll, pitch, and yaw data, along with the known Global Positioning System (GPS) location of the ground-based photodiode array receiver.
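As a rough illustration of the line-of-sight calculation such a tracker must perform: a sketch under standard WGS-84 assumptions, converting geodetic coordinates to ECEF and rotating the difference into the UAV's local East-North-Up frame. The function names and test coordinates are illustrative, not from the paper:

```python
import math

# WGS-84 ellipsoid constants (standard values).
A = 6378137.0                # semi-major axis, m
E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic latitude/longitude/altitude to ECEF (x, y, z) in m."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return (x, y, z)

def los_enu(uav, ground):
    """Unit line-of-sight vector from UAV to ground target, in local ENU axes."""
    ux, uy, uz = geodetic_to_ecef(*uav)
    gx, gy, gz = geodetic_to_ecef(*ground)
    dx, dy, dz = gx - ux, gy - uy, gz - uz
    lat, lon = math.radians(uav[0]), math.radians(uav[1])
    # Rotate the ECEF difference into the UAV-centered ENU frame.
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    r = math.sqrt(e * e + n * n + u * u)
    return (e / r, n / r, u / r)

# Sanity check: UAV 500 m directly above the receiver -> LOS points straight down.
e, n, u = los_enu((47.92, -97.09, 500.0), (47.92, -97.09, 0.0))
print(e, n, u)  # ≈ (0, 0, -1)
```

In a real payload the ENU vector would then be rotated by the IMU's roll/pitch/yaw into body axes to produce gimbal pan/tilt commands.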

  19. Development of a High-Performance Dual-Energy Chest Imaging System: Initial Investigation of Diagnostic Performance

    PubMed Central

    Kashani, H.; Gang, G.J.; Shkumat, N. A.; Varon, C. A.; Yorkston, J.; Van Metter, R.; Paul, N. S.; Siewerdsen, J. H.

    2009-01-01

    Rationale and Objectives: To assess the performance of a newly developed dual-energy (DE) chest radiography system in comparison to digital radiography (DR) in the detection and characterization of lung nodules. Materials and Methods: An experimental prototype has been developed for high-performance DE chest imaging with total dose equivalent to a single posterior-anterior DR image. Low- and high-kVp projections were used to decompose DE soft-tissue and bone images. A cohort of 55 patients (31 male, 24 female, mean age 65.6 years) was drawn from an ongoing trial involving patients referred for percutaneous CT-guided biopsy of suspicious lung nodules. DE and DR images were acquired of each patient prior to biopsy. Image quality was assessed by means of human observer tests involving 5 radiologists independently rating the detection and characterization of lung nodules on a 9-point scale. Results were analyzed in terms of the fraction of cases at or above a given rating, and statistical significance was evaluated with a Wilcoxon signed rank test. Performance was analyzed for all cases pooled as well as stratified by nodule size, density, lung region, and chest thickness. Results: The studies demonstrate a significant performance advantage for DE imaging compared to DR (p<0.001) in the detection and characterization of lung nodules. DE imaging improved the detection of both small and large nodules and exhibited the most significant improvement in the upper lobes, where overlying anatomical noise (ribs and clavicles) is believed to reduce nodule conspicuity in DR. Conclusions: DE imaging outperformed DR overall, particularly in the detection of small, solid nodules. DE imaging also performed better in regions dominated by anatomical noise, such as the lung apices. The potential for improved nodule detection and characterization at radiation doses equivalent to DR is encouraging and could support broader utilization of DE imaging.

  20. Engineering development of coal-fired high performance power systems, Phase II and Phase III. Quarter progress report, April 1, 1996--June 30, 1996

    SciTech Connect

    1996-11-01

    Work is presented on the development of a coal-fired high performance power generation system by the year 2000. This report describes the design of the air heater, duct heater, system controls, slag viscosity, and design of a quench zone.

  1. Direct determination of benzalkonium chloride in ophthalmic systems by reversed-phase high-performance liquid chromatography.

    PubMed

    Ambrus, G; Takahashi, L T; Marty, P A

    1987-02-01

    High-performance liquid chromatography has been used to quantitate benzalkonium chloride (alkylbenzyldimethylammonium chloride) in complex ophthalmic formulations at or below concentration levels of 50 ppm. The method involves a one-step dilution for sample preparation and direct injection; therefore, recovery and/or conversion problems are nonexistent. The assay is quick, specific, reproducible, and simple. This new approach makes routine determinations far simpler than previous methods and is especially useful for product stability studies and quality control procedures.

  2. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures; here it describes polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors, such as thermal cycling, stress level, and environmental effects, influence the performance of polymers. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylene-terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  3. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  4. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system were developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) was achieved, and the initial specifications necessary for these gains were defined. Failure testing of small windows conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  5. CLUPI, a high-performance imaging system on the rover of the 2018 mission to discover biofabrics on Mars

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; Coradini, A.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.

    2011-10-01

    The scientific objectives of the 2018 ExoMars rover mission are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, texture, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ExoMars rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized (<700 g), low-power, robust imaging system able to operate at very low temperatures (-120°C). CLUPI has a working distance from 10 cm to infinity, providing outstanding pictures with a colour detector of 2652x1768 pixels. At 10 cm, the resolution is 7 micrometres/pixel in colour. The optical-mechanical interface is a smart assembly in titanium that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, Mars Express and SMART-1 missions. In a typical field scenario, the geologist will use his/her eyes to make an overview of an area and the outcrops within it to determine sites of particular interest for more detailed study. In the ExoMars scenario, the PanCam wide angle cameras (WACs) will be used for this task. After having made a preliminary general evaluation, the geologist will approach a particular outcrop for closer observation of structures at the decimetre to subdecimetre scale (ExoMars' High Resolution Camera) before finally getting very close up to the surface with a hand lens (ExoMars' CLUPI), and/or taking a hand specimen, for detailed observation of textures and minerals. Using structural, textural and preliminary compositional analysis, the geologist identifies the materials and makes a decision as to whether they are of

  6. The choice of the principle of functioning of the system of magnetic levitation for the device of high-performance testing of powder permanent magnets

    NASA Astrophysics Data System (ADS)

    Shaykhutdinov, D. V.; Gorbatenko, N. I.; Narakidze, N. D.; Vlasov, A. S.; Stetsenko, I. A.

    2017-02-01

    The present article focuses on quality control problems for permanent magnets. High-performance direct-flow type systems for mechanical engineering production processes are considered. The main shortcoming of existing high-performance direct-flow systems is the final phase of movement of the tested product, where the motion is oscillatory and abrupt braking may be harmful to highly fragile samples. A special system for permanent magnet testing is offered that realizes magnetic levitation of the test sample. Active correction of the electric current in the magnetizing coils is proposed as the basic operating principle of this system. The system provides the required parameters of movement of the test sample by using an opposing connection of the magnetizing coils. This technique provides an aperiodic character of movement and limited acceleration while preserving high accuracy and the required timeframe for placing the sample in the measuring position.

  7. Department of Energy Project ER25739 Final Report QoS-Enabled, High-performance Storage Systems for Data-Intensive Scientific Computing

    SciTech Connect

    Rangaswami, Raju

    2009-05-31

    This project's work resulted in the following research projects: (1) BORG - Block-reORGanization for Self-optimizing Storage Systems; (2) ABLE - Active Block Layer Extensions; (3) EXCES - EXternal Caching in Energy-Saving Storage Systems; (4) GRIO - Guaranteed-Rate I/O Scheduler. These projects together help in substantially advancing the over-arching project goal of developing 'QoS-Enabled, High-Performance Storage Systems'.

  8. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  9. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  10. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide, and poly(ethylene naphthalate)). Two different methods were used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy.

  11. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system.

  12. Identification of high performance and component technology for space electrical power systems for use beyond the year 2000

    NASA Technical Reports Server (NTRS)

    Maisel, James E.

    1988-01-01

    Addressed are some of the space electrical power system technologies that should be developed for the U.S. space program to remain competitive in the 21st century. A brief historical overview of some U.S. manned/unmanned spacecraft power systems is given to establish the fact that electrical systems are, and will continue to become, more sophisticated as power levels approach those on the ground. Adaptive/expert power systems that can function in an extraterrestrial environment will be required to take appropriate action during electrical faults so that the impact is minimal. Man-hours can be reduced significantly by relinquishing tedious routine system component maintenance to the adaptive/expert system. By cataloging component signatures over time, this system can flag a premature component failure and thus possibly avoid a major fault. High-frequency operation is important if the electrical power system mass is to be cut significantly. High-power semiconductor or vacuum switching components will be required to meet future power demands. System mass tradeoffs have been investigated in terms of operating at high temperature, efficiency, voltage regulation, and system reliability. High-temperature semiconductors will be required: silicon carbide materials will operate at temperatures around 1000 K, and diamond materials up to 1300 K. The driver for elevated-temperature operation is that radiator mass is reduced significantly, because the required radiator area scales with the inverse fourth power of temperature.
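The radiator-mass argument in this abstract follows from the Stefan-Boltzmann law: radiated power is P = εσAT⁴, so the area (and hence mass) needed to reject a fixed heat load falls as 1/T⁴. A minimal numerical sketch (our illustration, not from the report; the 100 kW load and emissivity of 0.85 are assumed values):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.85) -> float:
    """Radiator area (m^2) needed to reject power_w at surface temperature temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Raising the radiator temperature from 300 K to 1000 K shrinks the
# required area by a factor of (1000/300)^4, roughly 123x.
for t in (300.0, 600.0, 1000.0, 1300.0):
    print(f"T = {t:6.0f} K -> area for 100 kW: {radiator_area(100e3, t):8.3f} m^2")
```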

  13. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  14. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance Fortran (HPF) is a set of extensions to FORTRAN 90 designed to allow specification of data-parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared- and distributed-memory machines.
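The effect of an HPF directive such as `!HPF$ DISTRIBUTE A(BLOCK)` is to give each of P processors a contiguous block of an array, with the compiler applying the owner-computes rule. A minimal Python model of that BLOCK mapping (our sketch of the distribution semantics, not HPF itself):

```python
def block_owner(i: int, n: int, p: int) -> int:
    """Processor that owns element i of an n-element BLOCK-distributed array on p processors."""
    block = -(-n // p)  # ceiling division gives the block size
    return i // block

def local_indices(rank: int, n: int, p: int) -> range:
    """Global indices assigned to a given processor rank."""
    block = -(-n // p)
    return range(rank * block, min((rank + 1) * block, n))

# 10 elements over 4 processors: blocks of 3, 3, 3, 1.
n, p = 10, 4
print([block_owner(i, n, p) for i in range(n)])  # -> [0, 0, 0, 1, 1, 1, 2, 2, 2, 3]
```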

  15. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    NASA Technical Reports Server (NTRS)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon based manufacturer of I/O computers. APTEC's work in the context of high density storage media is on programs requiring real-time data capture with low latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope-Data Archival and Distribution System. This is an existing Loral AeroSys designed system, which utilizes an APTEC I/O computer. The key attributes of a system architecture that is suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software designed for high bandwidth and low latency). APTEC's approach is outlined in this document.

  16. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  17. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop high-performance windows for commercial buildings that are cost-effective. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft2 F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  18. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, the mini H2 generator keeps the system far from explosion risk, because it eliminates the use of a high-pressure H2 source. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and very good linear response, and was able to rapidly monitor complicated environmental VOC samples.

  19. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    SciTech Connect

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D. R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A. F.; Ratti, A.; Sabbi, G. L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-08-17

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and a 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy extraction failure during testing of a high-field Nb3Sn dipole.

  20. Aquarius Project: Research in the System Architecture of Accelerators for the High Performance Execution of Logic Programs.

    DTIC Science & Technology

    1991-05-31

    ...accelerators for the high-performance execution of logic programs. It was conducted by the Electrical Engineering - Systems Department of the University of... program (engine) module, and a knowledge base. Each level accepts a specification in a formal specialized language and produces a more detailed and

  1. Report of the Defense Science Board 1981 Summer Study Panel on Operational Readiness with High Performance Systems

    DTIC Science & Technology

    1982-04-01

    ...decision to employ automated fault detection and isolation may permit more effective system operation with less-skilled personnel. However, if poorly... personnel or building the fault detection and isolation system over again. There are many other choices which must be made early in a program and... what training is required to support the concept? If sophisticated fault detection and isolation techniques are to be used, what demands will be placed

  2. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurement require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means for railway geometry measurement we suggest a stereoscopic system which measures the 3D position of fiducial marks arranged along the track using image-processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of +/-1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirmed that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means for railway track inspection.

  3. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster.

    PubMed

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR, which is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high-affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding-site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain-swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts.

  4. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster

    PubMed Central

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR, which is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high-affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding-site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain-swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts. PMID:25852654

  5. Imagining School Autonomy in High-Performing Education Systems: East Asia as a Source of Policy Referencing in England

    ERIC Educational Resources Information Center

    You, Yun; Morris, Paul

    2016-01-01

    Education reform is increasingly based on emulating the features of "world-class" systems that top international attainment surveys and, in England specifically, East Asia is referenced as the "inspiration" for their education reforms. However, the extent to which the features identified by the UK Government accord with the…

  6. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    PubMed

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and is widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic-pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulation is then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS.
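The core of frame aggregation is packing many small sensor readings into one aggregated frame so a single channel access amortizes per-frame overhead. A greedy sketch of that packing (our simplification, not the authors' protocol; the 127-byte budget is the illustrative IEEE 802.15.4 maximum PHY payload, and the 2-byte per-subframe header is an assumed cost):

```python
from typing import List

MAX_AGG_BYTES = 127  # illustrative IEEE 802.15.4 max PHY payload

def aggregate(packets: List[bytes], max_bytes: int = MAX_AGG_BYTES) -> List[List[bytes]]:
    """Greedily pack small packets into aggregated frames under a byte budget.
    Each subframe is charged len(p) plus an assumed 2-byte delimiter/length header."""
    frames, current, used = [], [], 0
    for p in packets:
        cost = len(p) + 2
        if used + cost > max_bytes and current:
            frames.append(current)  # budget exceeded: emit the current frame
            current, used = [], 0
        current.append(p)
        used += cost
    if current:
        frames.append(current)
    return frames

readings = [b"ecg%02d" % i for i in range(20)]  # 20 tiny 5-byte readings
frames = aggregate(readings)
print(len(readings), "packets ->", len(frames), "aggregated frames")
```

With 7 bytes charged per reading, 18 readings fit in one frame, so 20 readings need only 2 channel accesses instead of 20.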

  7. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems

    PubMed Central

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-01

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and is widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic-pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulation is then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS. PMID:28134853

  8. A Parallel Neuromorphic Text Recognition System and Its Implementation on a Heterogeneous High-Performance Computing Cluster

    DTIC Science & Technology

    2013-01-01

    ...on the 500 trillion floating-point operations per second (500 TFLOPS) Air Force Research Laboratory (AFRL)/Information Directorate (RI) Condor HPC after performance optimization. Index Terms—heterogeneous (hybrid) systems, distributed architecture, natural language interfaces... The Condor HPC cluster was built at AFRL/RI in 2010; it consists of 78 subclusters, and each subcluster is composed of dual

  9. A low-cost gradient system for high-performance liquid chromatography. Quantitation of complex pharmaceutical raw materials.

    PubMed

    Erni, F; Frei, R W

    1976-09-29

    A device is described that makes use of an eight-port motor valve to generate step gradients on the low-pressure side of a piston pump with a low dead volume. Such a gradient device with an automatic control unit, which also permits repetition of previous steps, can be built for about half the cost of a gradient system with two pumps. Applications of this gradient unit to the separation of complex mixtures of glycosides and alkaloids are discussed and compared with separation systems using two high-pressure pumps. The gradients, which are used on reversed-phase material with solvent mixtures of water and completely miscible organic solvents, are suitable for quantitative routine control of pharmaceutical products. The reproducibility of retention data is excellent over several months and, with the use of loop injectors, major components can be determined quantitatively with a reproducibility of better than 2% (relative standard deviation). The step-gradient selector valve can also be used as an introduction system for very large sample volumes: up to 1 l can be injected, and samples with concentrations of less than 1 ppb can be determined with good reproducibility.

  10. Making resonance a common case: a high-performance implementation of collective I/O on parallel file systems

    SciTech Connect

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2009-01-01

    Collective I/O is a widely used technique to improve I/O performance in parallel computing. It can be implemented as a client-based or server-based scheme. The client-based implementation is more widely adopted in MPI-IO software such as ROMIO because of its independence from the storage system configuration and its greater portability. However, existing implementations of client-side collective I/O do not take into account the actual pattern of file striping over multiple I/O nodes in the storage system. This can cause a significant number of requests for non-sequential data at I/O nodes, substantially degrading I/O performance. Investigating the surprisingly high I/O throughput achieved when there is an accidental match between a particular request pattern and the data striping pattern on the I/O nodes, we reveal the resonance phenomenon as the cause. Exploiting readily available information on data striping from the metadata server in popular file systems such as PVFS2 and Lustre, we design a new collective I/O implementation technique, resonant I/O, that makes resonance a common case. Resonant I/O rearranges requests from multiple MPI processes to transform non-sequential data accesses on I/O nodes into sequential accesses, significantly improving I/O performance without compromising the independence of a client-based implementation. We have implemented our design in ROMIO. Our experimental results show that the scheme can increase I/O throughput for some commonly used parallel I/O benchmarks, such as mpi-io-test and ior-mpi-io, over the existing implementation of ROMIO by up to 157%, with no scenario demonstrating significantly decreased performance.
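The key step of resonant I/O is regrouping requests by the file's striping layout so each I/O node receives a sorted, sequential run of offsets. A toy sketch of that regrouping under round-robin striping (our simplification, not ROMIO's actual code; the stripe size and node count are assumed parameters):

```python
from collections import defaultdict

def regroup_by_stripe(requests, stripe_size, n_io_nodes):
    """requests: list of (offset, length) pairs from many MPI processes.
    Returns {io_node: [(offset, length), ...]} with offsets sorted per node,
    assuming round-robin striping of stripe_size-byte chunks over the nodes."""
    per_node = defaultdict(list)
    for off, length in requests:
        node = (off // stripe_size) % n_io_nodes  # which node holds this stripe
        per_node[node].append((off, length))
    for node in per_node:
        per_node[node].sort()  # sequential access order within each I/O node
    return dict(per_node)

# Out-of-order client requests become two sequential streams, one per node.
reqs = [(512, 64), (0, 64), (768, 64), (256, 64)]
print(regroup_by_stripe(reqs, stripe_size=256, n_io_nodes=2))
```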

  11. High-Performance Consensus Control in Networked Systems With Limited Bandwidth Communication and Time-Varying Directed Topologies.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang

    2016-02-08

    Communication data rates and energy constraints are two important factors that have to be considered in the coordination control of multiagent networks. Although some encoder-decoder-based consensus protocols are available, there still exists a fundamental theoretical problem: how can we further reduce the update rate of the control input for each agent without changing the consensus performance? In this paper, we consider the problem of average consensus over directed and time-varying digital networks of discrete-time first-order multiagent systems with limited communication data transmission rates. Each agent has a real-valued state but can only exchange binary symbolic sequences with its neighbors due to bandwidth constraints. A class of novel event-triggered dynamic encoding and decoding algorithms is proposed, based on which a kind of consensus protocol is presented. Moreover, we develop a scheme to select the numbers of time-varying quantization levels for each connected communication channel in the time-varying directed topologies at each time step. The analytical relation among system and network parameters is characterized explicitly. It is shown that the asymptotic convergence rate is related to the scale of the network, the number of quantization levels, the system parameter, and the network structure. It is also found that under the designed event-triggered protocol, for a directed and time-varying digital network that uniformly contains a spanning tree over a time interval, average consensus can be achieved with an exponential convergence rate based on merely 1-bit information exchange between each pair of adjacent agents at each time step.
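The protocol above builds on the standard discrete-time average-consensus update x(k+1) = W x(k). This sketch shows only that unquantized baseline (not the paper's 1-bit encoder/decoder), illustrating the exponential convergence the authors analyze; the doubly-stochastic weight matrix W for a 4-node ring is our own example:

```python
import math

# Doubly-stochastic weights for a 4-node ring with self-loops: row and column
# sums are 1, so the state average is preserved at every step.
W = [[0.50, 0.25, 0.00, 0.25],
     [0.25, 0.50, 0.25, 0.00],
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]
x = [1.0, 3.0, 5.0, 7.0]
avg = sum(x) / len(x)  # 4.0

# Iterate x <- W x; the disagreement shrinks geometrically (here by the
# second-largest eigenvalue of W, 0.5, per step).
for _ in range(100):
    x = [sum(W[i][j] * x[j] for j in range(4)) for i in range(4)]

assert all(math.isclose(xi, avg, abs_tol=1e-6) for xi in x)
print("consensus value:", [round(xi, 4) for xi in x])
```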

  12. MO-G-17A-01: Innovative High-Performance PET Imaging System for Preclinical Imaging and Translational Researches

    SciTech Connect

    Sun, X; Lou, K; Deng, Z; Shao, Y

    2014-06-15

    Purpose: To develop a practical and compact preclinical PET scanner with innovative technologies for the substantially improved imaging performance required by advanced imaging applications. Methods: Several key components (detector, readout electronics and data acquisition) have been developed and evaluated to achieve leapfrogged imaging performance over a prototype animal PET we had developed previously. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm³ LYSO scintillators, with each end coupled to a latest-generation 4×4 array of 3×3 mm² silicon photomultipliers (with ∼0.2 mm insensitive gaps between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to reserve an air gap, to achieve higher depth-of-interaction (DOI) resolution and other detector performance. Front-end readout electronics with an upgraded 16-channel ASIC were newly developed and tested, as was the compact, high-density FPGA-based data acquisition and transfer system targeting a 10 M/s coincidence counting rate with low power consumption. The energy, timing and DOI resolutions of the new detector module with the data acquisition system were evaluated. An initial Na-22 point-source image was acquired with 2 rotating detectors to assess the system imaging capability. Results: There are no insensitive gaps at the detector edge, so it can be tiled into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. Measured energy, timing, and DOI resolutions are around 17%, 2.7 ns and 1.96 mm (mean value). A point-source image was acquired successfully without detector/electronics calibration and data correction. Conclusion: The newly developed detector and readout electronics will enable the targeted scalable and compact PET system in a stationary configuration with >15% sensitivity, ∼1.3 mm uniform imaging resolution, and a fast acquisition counting rate.

  13. Compensation of Wave-Induced Motion and Force Phenomena for Ship-Based High Performance Robotic and Human Amplifying Systems

    SciTech Connect

    Love, LJL

    2003-09-24

    The decrease in manpower and increase in material handling needs on many Naval vessels provide the motivation to explore the modeling and control of Naval robotic and robotic assistive devices. This report addresses the design, modeling, control and analysis of position- and force-controlled robotic systems operating on the deck of a moving ship. First, we provide background information that quantifies the motion of the ship, both in terms of frequency and amplitude. We then formulate the motion of the ship in terms of homogeneous transforms. This transformation provides a link between the motion of the ship and the base of a manipulator. We model the kinematics of a manipulator as a serial extension of the ship motion. We then show how to use these transforms to formulate the kinetic and potential energy of a general, multi-degree-of-freedom manipulator moving on a ship. As a demonstration, we consider two examples: a one-degree-of-freedom system experiencing three sea states operating in a plane, to verify the methodology, and a 3-degree-of-freedom system experiencing all six degrees of ship motion, to illustrate the ease of computation and the complexity of the solution. The first series of simulations explores the impact wave motion has on the tracking performance of a position-controlled robot. We provide a preliminary comparison between conventional linear control and Repetitive Learning Control (RLC) and show how fixed-time-delay RLC breaks down due to the varying nature of the wave disturbance frequency. Next, we explore the impact wave-motion disturbances have on Human Amplification Technology (HAT). We begin with a description of the traditional HAT control methodology. Simulations show that the motion of the base of the robot, due to ship motion, generates disturbance forces reflected to the operator that significantly degrade the positioning accuracy and resolution at higher sea states. As with position-controlled manipulators, augmenting the control with a Repetitive

  14. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  15. Ultra-high performance mirror systems for the imaging and coherence beamline I13 at the Diamond Light Source

    NASA Astrophysics Data System (ADS)

    Wagner, U. H.; Alcock, S.; Ludbrook, G.; Wiatryzk, J.; Rau, C.

    2012-05-01

    I13L is a 250 m long hard X-ray beamline (6 keV to 35 keV) currently under construction at the Diamond Light Source. The beamline comprises two independent experimental endstations: one for imaging in direct space using X-ray microscopy and one for imaging in reciprocal space using coherent-diffraction-based imaging techniques. To minimise the impact of thermal fluctuations and vibrations on the beamline performance, we are developing a new generation of ultra-stable beamline instrumentation with highly repeatable adjustment mechanisms, using low-thermal-expansion materials like granite and large piezo-driven flexure stages. To minimise beam distortion we use very high-quality optical components, such as large ion-beam-polished mirrors. In this paper we present the first metrology results on a newly designed mirror system following this design philosophy.

  16. High performance nuclear thermal propulsion system for near term exploration missions to 100 A.U. and beyond

    NASA Astrophysics Data System (ADS)

    Powell, James R.; Paniagua, John; Maise, George; Ludewig, Hans; Todosow, Michael

    1999-05-01

    A new compact, ultralight nuclear reactor engine design termed MITEE (MIniature ReacTor EnginE) is described. MITEE heats hydrogen propellant to 3000 K, achieving a specific impulse of 1000 seconds and a thrust-to-weight ratio of 10. Total engine mass is 200 kg, including reactor, pump, auxiliaries and a 30% contingency. MITEE enables many types of new and unique missions to the outer solar system not possible with chemical engines. Examples include missions to 100 A.U. in less than 10 years, flybys of Pluto in 5 years, sample return from Pluto and the moons of the outer planets, and unlimited ramjet flight in planetary atmospheres. Much of the necessary technology for MITEE already exists as a result of previous nuclear rocket development programs. With some additional development, initial MITEE missions could begin in only 6 years.

  17. High performance liquid level monitoring system based on polymer fiber Bragg gratings embedded in silicone rubber diaphragms

    NASA Astrophysics Data System (ADS)

    Marques, Carlos A. F.; Peng, Gang-Ding; Webb, David J.

    2015-05-01

    Liquid-level sensing technologies have attracted great prominence, because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid level sensors have been investigated which are based on different operating principles, such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors as described above placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression of the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach.
First, the use of linear regression using
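    The level-determination scheme described above can be sketched in a few lines: regress the sub-surface pressure readings against sensor height, then solve the fitted line for the height at which pressure equals ambient. This is an illustrative sketch only; the hydrostatic constant, the sensor layout, and the function name are assumptions, not details from the paper.

```python
import numpy as np

RHO_G = 9810.0  # assumed water density * g, in Pa per metre of depth

def liquid_level(heights, pressures, ambient):
    """Estimate liquid level from pressure sensors at known heights.

    heights   -- sensor heights above the container bottom (m)
    pressures -- pressure read by each sensor (Pa)
    ambient   -- ambient (above-surface) pressure (Pa)
    """
    heights = np.asarray(heights, float)
    pressures = np.asarray(pressures, float)
    wet = pressures > ambient + 1e-6          # sensors below the surface
    if wet.sum() < 2:
        raise ValueError("need at least two submerged sensors to regress")
    # hydrostatics: p = ambient + rho*g*(level - h), i.e. linear in h
    slope, intercept = np.polyfit(heights[wet], pressures[wet], 1)
    # the surface is where the fitted pressure equals ambient
    return (ambient - intercept) / slope
```

    With, say, five sensors spaced 0.2 m apart and a true level of 0.55 m, the regression recovers the level from the three submerged readings alone, which is the precision gain the abstract describes.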

  18. Evolution of high-performance swimming in sharks: transformations of the musculotendinous system from subcarangiform to thunniform swimmers.

    PubMed

    Gemballa, Sven; Konstantinidis, Peter; Donley, Jeanine M; Sepulveda, Chugey; Shadwick, Robert E

    2006-04-01

    In contrast to all other sharks, lamnid sharks perform a specialized fast and continuous "thunniform" type of locomotion, more similar to that of tunas than to any other known shark or bony fish. Within sharks, it has evolved from a subcarangiform mode. Experimental data show that the two swimming modes in sharks differ remarkably in kinematic patterns as well as in muscle activation patterns, but the morphology of the underlying musculotendinous system (red muscles and myosepta) that drives continuous locomotion remains largely unknown. The goal of this study was to identify differences in the musculotendinous system of the two swimming types and to evaluate these differences in an evolutionary context. Three subcarangiform sharks (the velvet belly lantern shark, Etmopterus spinax, the smallspotted catshark, Scyliorhinus canicula, and the blackmouth catshark, Galeus melanostomus) from the two major clades (two galeans, one squalean) and one lamnid shark, the shortfin mako, Isurus oxyrinchus, were compared with respect to 1) the 3D shape of myomeres and myosepta at different body positions; 2) the tendinous architecture (collagenous fiber pathways) of myosepta from different body positions; and 3) the association of red muscles with myoseptal tendons. Results show that the three subcarangiform sharks are morphologically similar but differ remarkably from the lamnid condition. Moreover, the "subcarangiform" morphology is similar to the condition known from teleostomes. Thus, major features of the "subcarangiform" condition in sharks have evolved early in gnathostome history: Myosepta have one main anterior-pointing cone and two posterior-pointing cones that project into the musculature. Within a single myoseptum, cones are connected by longitudinally oriented tendons (the hypaxial and epaxial lateral and myorhabdoid tendons). Mediolaterally oriented tendons (epineural and epipleural tendons; mediolateral fibers) connect the vertebral axis and skin.
An individual lateral

  19. High performance seizure-monitoring system using a vibration sensor and videotape recording: behavioral analysis of genetically epileptic rats.

    PubMed

    Amano, S; Yokoyama, M; Torii, R; Fukuoka, J; Tanaka, K; Ihara, N; Hazama, F

    1997-06-01

    A new seizure-monitoring apparatus containing a piezoceramic vibration sensor combined with videotape recording was developed. Behavioral analysis of Ihara's genetically epileptic rat (IGER), a recently developed mutant with spontaneous limbic-like seizures, was performed using this new device. Twenty 8-month-old male IGERs were monitored continuously for 72 h. Abnormal behaviors were detected by use of a vibration recorder, and epileptic seizures were confirmed by videotape recordings taken synchronously with vibration recording. Representative forms of seizures were generalized convulsions and circling seizures. Generalized convulsions were found in 13 rats, and circling seizures in 7 of 20 animals. Two rats had both generalized and circling seizures, and two rats had no seizures. Although there was no apparent circadian rhythm to the generalized seizures, circling seizures occurred mostly between 1800 and 0800 h. A correlation between the sleep-wake cycle and the occurrence of circling seizures seems likely. Without exception, all the seizure actions were recorded by the vibration recorder and the videotape recorder. To eliminate the risk of a false-negative result, investigators scrutinized the information obtained from both the vibration sensor and the videotape recorder. The newly developed seizure-monitoring system was found to facilitate detailed analysis of epileptic seizures in rats.

  20. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Volume 1, Final report

    SciTech Connect

    1996-02-01

    A major objective of the coal-fired high performance power systems (HIPPS) program is to achieve significant increases in the thermodynamic efficiency of coal use for electric power generation. Through increased efficiency, all airborne emissions can be decreased, including emissions of carbon dioxide. High performance power systems as defined for this program are coal-fired, high efficiency systems where the combustion products from coal do not contact the gas turbine. Typically, this type of system will involve some indirect heating of gas turbine inlet air and then topping combustion with a cleaner fuel. The topping combustion fuel can be natural gas or another relatively clean fuel. Fuel gas derived from coal is an acceptable fuel for the topping combustion. The ultimate goal for HIPPS is to have a system that has 95 percent of its heat input from coal. Interim systems that have at least 65 percent heat input from coal are acceptable, but these systems are required to have a clear development path to a system that is 95 percent coal-fired. A three-phase program has been planned for the development of HIPPS. Phase 1, reported herein, includes the development of a conceptual design for a commercial plant. Technical and economic feasibility have been analysed for this plant. Preliminary R&D on some aspects of the system was also done in Phase 1, and a Research, Development and Test plan was developed for Phase 2. Work in Phase 2 includes the testing and analysis that is required to develop the technology base for a prototype plant. This work includes pilot plant testing at a scale of around 50 MMBtu/hr heat input. The culmination of the Phase 2 effort will be a site-specific design and test plan for a prototype plant. Phase 3 is the construction and testing of this plant.

  1. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
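    The serial core of the dimension-reduction step, PCA via an eigendecomposition of the band-by-band covariance matrix, can be sketched as follows. This is only the single-node kernel that a cluster implementation would distribute; the function name and array shapes are assumptions, not the report's code.

```python
import numpy as np

def pca_reduce(pixels, n_components):
    """Reduce the spectral dimensionality of remote sensing data with PCA.

    pixels -- (n_pixels, n_bands) array of spectral observations
    Returns the (n_pixels, n_components) projection onto the top components.
    """
    centered = pixels - pixels.mean(axis=0)          # remove per-band mean
    cov = np.cov(centered, rowvar=False)             # (n_bands, n_bands)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues ascending
    top = eigvecs[:, ::-1][:, :n_components]         # strongest components first
    return centered @ top
```

    For hyperspectral data the covariance accumulation dominates, which is why, as the abstract notes, the parallel version is communication-bound and benefits from a fast interconnect.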

  2. A high-performance polycarbonate electrophoresis microchip with integrated three-electrode system for end-channel amperometric detection.

    PubMed

    Wang, Yurong; Chen, Hengwu; He, Qiaohong; Soper, Steven A

    2008-05-01

    A fully integrated polycarbonate (PC) microchip for CE with end-channel electrochemical detection operated in an amperometric mode (CE-ED) has been developed. The on-chip integrated three-electrode system consisted of a gold working electrode, an Ag/AgCl reference electrode and a platinum counter electrode, which was fabricated by photo-directed electroless plating combined with electroplating. The working electrode was positioned against the separation channel exit to reduce post-channel band broadening. The electrophoresis high-voltage (HV) interference with the amperometric detection was assessed with respect to detection noise and potential shifts at various working-to-reference electrode spacings. It was observed that the electrophoresis HV interference caused by positioning the working electrode against the channel exit could be diminished by using an on-chip integrated reference electrode that was positioned in close proximity (100 μm) to the working electrode. The CE-ED microchip was demonstrated for the separation of model analytes, including dopamine (DA) and catechol (CA). Detection limits of 132 and 164 nM were achieved for DA and CA, respectively, and a theoretical plate number of 2.5 × 10^4/m was obtained for DA. Relative standard deviations in peak heights observed for five runs of a standard solution containing the two analytes (0.1 mM each) were 1.2 and 3.1% for DA and CA, respectively. The chip could be continuously used for more than 8 h without significant deterioration in analytical performance.

  3. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  4. Simultaneous determination of nicotine and cotinine in serum using high-performance liquid chromatography with fluorometric detection and postcolumn UV-photoirradiation system.

    PubMed

    Yasuda, Makoto; Ota, Tatsuhiro; Morikawa, Atsushi; Mawatari, Ken-ichi; Fukuuchi, Tomoko; Yamaoka, Noriko; Kaneko, Kiyoko; Nakagomi, Kazuya

    2013-09-01

    A simple and rapid method for the simultaneous determination of serum nicotine and cotinine using high-performance liquid chromatography (HPLC)-fluorometric detection with a postcolumn ultraviolet-photoirradiation system was developed. Analytes were extracted from alkalinized human serum via liquid-liquid extraction using chloroform. The organic phase was back-extracted with the acidified aqueous phase, and the analytes were directly injected into an ion-pair reversed-phase HPLC system. 6-Aminoquinoline was used as an internal standard. Nicotine, cotinine, and 6-aminoquinoline were separated within 14 min. The extraction efficiency of nicotine and cotinine was greater than 91%. The linear range was 0.30-1000 ng for nicotine and 0.06-1000 ng for cotinine. In serum samples from smokers, the concentrations of nicotine and cotinine were 8-15 ng/mL and 156-372 ng/mL, respectively.

  5. Integration of tools for the design and assessment of high-performance, highly reliable computing systems (DAHPHRS). Final report, Jun 89-Sep 90

    SciTech Connect

    Scheper, C.O.; Baker, R.L.; Waters, H.L.

    1991-12-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the system engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report will describe an investigation which examined methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimac, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  6. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
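    As a simplified, serial illustration of the outlier-flagging idea (the abstract does not specify the parallel information-based detectors or the iterative z algorithm, so this is a generic stand-in with assumed names), a plain z-score rule over event features might look like:

```python
import numpy as np

def zscore_outliers(events, threshold=3.0):
    """Flag events whose value deviates more than `threshold`
    standard deviations from the sample mean (simple z-score rule)."""
    x = np.asarray(events, float)
    z = np.abs(x - x.mean()) / x.std()
    return np.flatnonzero(z > threshold)   # indices of suspicious events
```

    A proactive system would run a rule like this continuously over sliding windows of event data, sharding the windows across nodes.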

  7. Developing collective customer knowledge and service climate: The interaction between service-oriented high-performance work systems and service leadership.

    PubMed

    Jiang, Kaifeng; Chuang, Chih-Hsun; Chiao, Yu-Ching

    2015-07-01

    This study theorized and examined the influence of the interaction between service-oriented high-performance work systems (HPWSs) and service leadership on collective customer knowledge and service climate. Using a sample of 569 employees and 142 managers in footwear retail stores, we found that service-oriented HPWSs and service leadership reduced the influence of one another on collective customer knowledge and service climate: the positive influence of service leadership on both outcomes was stronger when service-oriented HPWSs were lower, and, likewise, the positive influence of service-oriented HPWSs was stronger when service leadership was lower. We further proposed and found that collective customer knowledge and service climate were positively related to objective financial outcomes through service performance. Implications for the literature and managerial practices are discussed.

  8. Development of a temperature-compensated hot-film anemometer system for boundary-layer transition detection on high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Chiles, H. R.; Johnson, J. B.

    1985-01-01

    A hot-film constant-temperature anemometer (CTA) system was flight-tested and evaluated as a candidate sensor for determining boundary-layer transition on high-performance aircraft. The hot-film gage withstood an extreme flow environment characterized by shock waves and high dynamic pressures, although the CTA's sensitivity to the local total temperature indicated the need for some form of temperature compensation. A temperature-compensation scheme was developed, and two CTAs were modified and flight-tested on the F-104/Flight Test Fixture (FTF) facility at Mach numbers from 0.4 to 1.8 and altitudes from 5,000 to 40,000 ft.

  9. Determination of Sunset Yellow and Tartrazine in Food Samples by Combining Ionic Liquid-Based Aqueous Two-Phase System with High Performance Liquid Chromatography

    PubMed Central

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We propose a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine and sunset yellow in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01–50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples, and leads to results consistent with those obtained from the Chinese national standard method. PMID:25538857

  10. Determination of sunset yellow and tartrazine in food samples by combining ionic liquid-based aqueous two-phase system with high performance liquid chromatography.

    PubMed

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We propose a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine and sunset yellow in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01-50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples, and leads to results consistent with those obtained from the Chinese national standard method.
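    The quantification workflow behind the reported figures, a linear calibration plot followed by detection-limit estimation, can be sketched generically. The k·σ/slope LOD convention and every name below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def calibrate(conc, response):
    """Least-squares calibration line: response = slope*conc + intercept."""
    slope, intercept = np.polyfit(conc, response, 1)
    return slope, intercept

def quantify(signal, slope, intercept):
    """Concentration of an unknown from its detector response."""
    return (signal - intercept) / slope

def detection_limit(blank_sd, slope, k=3.0):
    """LOD by the common k*sigma/slope convention (k = 3)."""
    return k * blank_sd / slope
```

    In practice the calibration standards would span the linear range reported above (0.01-50.0 μg/mL), and each food-sample extract would be quantified against that line.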

  11. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  12. High Performance Computing Multicast

    DTIC Science & Technology

    2012-02-01

    conditions are thus met for a myriad of critical national-asset applications that are likely to move to the cloud in the next decade. In the context of this ... for that key. Shard mappings change as nodes join and leave the ring, and data is moved around accordingly (a form of state transfer). Coordination

  13. High performance collectors

    NASA Astrophysics Data System (ADS)

    Ogawa, H.; Hozumi, S.; Mitsumata, T.; Yoshino, K.; Aso, S.; Ebisu, K.

    1983-04-01

    Materials and structures used for flat plate solar collectors and evacuated tubular collectors were examined relative to their overall performance to project effectiveness for building heating and cooling and the feasibility of use for generating industrial process heat. Thermal efficiencies were calculated for black paint single glazed, selective surface single glazed, and selective surface double glazed flat plate collectors. The efficiencies of a single tube and central tube accompanied by two side tube collectors were also studied. Techniques for extending the lifetimes of the collectors were defined. The selective surface collectors proved to have a performance superior to other collectors in terms of the average annual energy delivered. Addition of a black chrome-coated fin system to the evacuated collectors produced significant collection efficiency increases.

  14. Development of a high-performance, coal-fired power generating system with a pyrolysis gas and char-fired high-temperature furnace

    SciTech Connect

    Shenker, J.

    1995-11-01

    A high-performance power system (HIPPS) is being developed. This system is a coal-fired, combined-cycle plant that will have an efficiency of at least 47 percent, based on the higher heating value of the fuel. The original emissions goal of the project was for NOx and SOx to each be below 0.15 lb/MMBtu. In the Phase 2 RFP this emissions goal was reduced to 0.06 lb/MMBtu. The ultimate goal of HIPPS is to have an all-coal-fueled system, but initial versions of the system are allowed up to 35 percent heat input from natural gas. Foster Wheeler Development Corporation is currently leading a team effort with AlliedSignal, Bechtel, Foster Wheeler Energy Corporation, Research-Cottrell, TRW and Westinghouse. Previous work on the project was also done by General Electric. The HIPPS plant will use a High-Temperature Advanced Furnace (HITAF) to achieve combined-cycle operation with coal as the primary fuel. The HITAF is an atmospheric-pressure, pulverized-fuel-fired boiler/air heater. The HITAF is used to heat air for the gas turbine and also to transfer heat to the steam cycle. Its design and functions are very similar to those of conventional PC boilers. Some important differences, however, arise from the requirements of combined-cycle operation.

  15. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

    In this paper, parallelism methodologies for mapping rules derived from machine learning algorithms onto both software and hardware are investigated. Feeding these algorithms with patient disease data outputs medical diagnostic decision trees and their corresponding rules. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks, which operate concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measurements taken from patients can be fed into both the chips and the parallel programs and recognized according to the classification rules incorporated in the chip and program designs. The chips and the programs constitute medical decision support systems and can be incorporated into portable micro-devices, assisting physicians in their everyday diagnostic practice.

  16. High Performance Flexible Thermal Link

    NASA Astrophysics Data System (ADS)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high-performance, flexible carbon fibre thermal link. The project goal was to design a space-qualified thermal link combining low mass, flexibility and high thermal conductivity, with new approaches regarding selected materials and processes. The idea was to combine the flexibility of existing metallic links with the thermal performance of highly conductive carbon pitch fibres. Special focus is laid on improving the thermal performance of matrix systems by means of nano-scaled carbon materials, in order to improve the thermal performance perpendicular to the direction of the unidirectional fibres as well. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix, and performing the implementation into an interface component using unconventional process steps like thermal bonding of fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  17. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology

    PubMed Central

    Foran, David J; Yang, Lin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    Objective and design: The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. Results: The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples. PMID:21606133

  18. Optimization and Assessment of Three Different High Performance Liquid Chromatographic Systems for the Combinative Fingerprint Analysis and Multi-Ingredients Quantification of Sangju Ganmao Tablet.

    PubMed

    Guo, Meng-Zhe; Han, Jie; He, Dan-Dan; Zou, Jia-Hui; Li, Zheng; Du, Yan; Tang, Dao-Quan

    2017-03-01

    Chromatographic separation remains a critical subject for the quality control of traditional Chinese medicine. In this study, three different high performance liquid chromatographic (HPLC) systems, employing commercially available columns packed with 1.8, 3.5 and 5.0 μm particles, were developed and optimized for the combinative fingerprint analysis and multi-ingredient quantification of Sangju Ganmao tablet (SGT). Chromatographic parameters including the repeatability of retention time and peak area, symmetry factor, resolution, number of theoretical plates and peak capacity were used to assess the performance of the different HPLC systems. The optimal chromatographic system, using an Agilent ZORBAX SB-C18 column (2.1 mm × 100 mm, 3.5 μm) as stationary phase, was coupled with either a diode array detector or a mass spectrometry detector for chromatographic fingerprint analysis and simultaneous quantification or identification of nine compounds of SGT. All validation data conformed to the acceptance requirements. For the fingerprint analysis, 31 peaks were selected as common peaks to evaluate the similarities of SGT from 10 different manufacturers using a heatmap, hierarchical cluster analysis and principal component analysis. The results demonstrated that combining quantitative and chromatographic fingerprint analysis offers an efficient way to evaluate the quality consistency of SGT.
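Similarity evaluation over common peaks can be illustrated with a toy calculation. Cosine similarity of peak-area vectors is one standard fingerprint similarity measure; the 10-element vectors below are invented, not data from the paper.

```python
# Hedged sketch of chromatographic fingerprint similarity: batches whose
# common-peak area profiles agree score near 1.0; divergent batches
# score much lower. All peak areas here are made up for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

batch_a = [5.2, 1.1, 3.4, 0.9, 2.2, 7.8, 0.5, 1.9, 4.0, 2.7]
batch_b = [5.0, 1.0, 3.6, 1.0, 2.1, 8.1, 0.4, 2.0, 3.8, 2.9]  # similar batch
batch_c = [0.3, 6.5, 0.2, 5.1, 0.1, 0.9, 6.2, 0.3, 0.8, 5.5]  # divergent batch

print(round(cosine_similarity(batch_a, batch_b), 3))  # near 1.0
print(round(cosine_similarity(batch_a, batch_c), 3))  # much lower
```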

  19. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions for Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  20. Transforming Regions into High-Performing Health Systems Toward the Triple Aim of Better Health, Better Care and Better Value for Canadians.

    PubMed

    Bergevin, Yves; Habib, Bettina; Elicksen-Jensen, Keesa; Samis, Stephen; Rochon, Jean; Denis, Jean-Louis; Roy, Denis

    2016-01-01

    A study on the impact of regionalization on the Triple Aim of Better Health, Better Care and Better Value across Canada in 2015 identified major findings including: (a) with regard to the Triple Aim, the Canadian situation is better than before but variable and partial, and Canada continues to underperform compared with other industrialized countries, especially in primary healthcare where it matters most; (b) provinces are converging toward a two-level health system (provincial/regional); (c) optimal size of regions is probably around 350,000-500,000 population; (d) citizen and physician engagement remains weak. A realistic and attainable vision for high-performing regional health systems is presented together with a way forward, including seven areas for improvement: 1. Manage the integrated regionalized health systems as results-driven health programs; 2. Strengthen wellness promotion, public health and intersectoral action for health; 3. Ensure timely access to personalized primary healthcare/family health and to proximity services; 4. Involve physicians in clinical governance and leadership, and partner with them in accountability for results including the required changes in physician remuneration; 5. Engage citizens in shaping their own health destiny and their health system; 6. Strengthen health information systems, accelerate the deployment of electronic health records and ensure their interoperability with health information systems; 7. Foster a culture of excellence and continuous quality improvement. We propose a turning point for Canada, from Paradigm Freeze to Paradigm Shift: from hospital-centric episodic care toward evidence-informed population-based primary and community care with modern family health teams, ensuring integrated and coordinated care along the continuum, especially for high users.
We suggest goals and targets for 2020 and time-bound federal/provincial/regional working groups toward reaching the identified goals and targets and placing

  1. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called "Patient Card". Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  2. Determination of histamine in wines with an on-line pre-column flow derivatization system coupled to high performance liquid chromatography.

    PubMed

    García-Villar, Natividad; Saurina, Javier; Hernández-Cassou, Santiago

    2005-09-01

    A new rapid and sensitive high performance liquid chromatography (HPLC) method for determining histamine in red wine samples, based on continuous flow derivatization with 1,2-naphthoquinone-4-sulfonate (NQS), is proposed. In this system, samples are derivatized on-line in a three-channel flow manifold for reagent, buffer and sample. The reaction takes place in a PTFE coil heated at 80 degrees C and with a residence time of 2.9 min. The reaction mixture is injected directly into the chromatographic system, where the histamine derivative is separated from other aminated compounds present in the wine matrix in less than ten minutes. The HPLC procedure involves a C18 column, a binary gradient of 2% acetic acid-methanol as a mobile phase, and UV detection at 305 nm. Analytical parameters of the method are evaluated using red wine samples. The linear range is up to 66.7 mg L(-1) (r = 0.9999), the precision (RSD) is 3%, the detection limit is 0.22 mg L(-1), and the average histamine recovery is 101.5% +/- 6.7%. Commercial red wines from different Spanish regions are analyzed with the proposed method.
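The figures of merit quoted above (spike recovery, detection limit) follow from simple arithmetic. A minimal sketch with invented raw numbers, chosen only to land in the same range as the reported values:

```python
# Sketch of two common analytical figures of merit. All input numbers
# below are invented for illustration; they are not the paper's raw data.
spiked_conc = 10.0          # mg/L histamine added to a wine sample
measured_native = 2.0       # mg/L found in the unspiked sample
measured_spiked = 12.15     # mg/L found after spiking

recovery = 100 * (measured_spiked - measured_native) / spiked_conc
print(f"recovery = {recovery:.1f}%")

# 3*sigma/slope detection limit from a calibration line.
blank_sd = 0.005            # std. dev. of blank signal (AU), invented
slope = 0.068               # calibration slope (AU per mg/L), invented
lod = 3 * blank_sd / slope
print(f"LOD = {lod:.2f} mg/L")
```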

  3. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis

    PubMed Central

    Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core material. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems. PMID:28386547

  4. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the face of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment with clear applications in several domains, including telemedicine and homecare.
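As a heavily simplified stand-in for the smoothed pseudo-Wigner-Ville analysis used in the paper, the dominant cardiac frequency can be recovered from a mean-pixel-intensity trace by scanning the physiological band with a naive discrete Fourier projection. The signal below is synthetic; frame rate, pulse frequency, and amplitudes are all assumptions.

```python
# Simplified pulse-rate recovery from a synthetic camera trace: remove
# the DC (ambient) level, then find the strongest frequency in the
# physiological band 0.7-3.0 Hz (42-180 bpm).
import math

fs = 30.0                      # camera frame rate (Hz), assumed
f_heart = 1.2                  # simulated pulse: 1.2 Hz = 72 bpm
n = 300                        # 10 s of frames
signal = [1.0 + 0.05 * math.sin(2 * math.pi * f_heart * t / fs) for t in range(n)]

mean_level = sum(signal) / n
ac = [s - mean_level for s in signal]   # AC component of the trace

def power_at(freq):
    re = sum(x * math.cos(2 * math.pi * freq * t / fs) for t, x in enumerate(ac))
    im = sum(x * math.sin(2 * math.pi * freq * t / fs) for t, x in enumerate(ac))
    return re * re + im * im

candidates = [0.7 + 0.1 * k for k in range(24)]   # 0.7 .. 3.0 Hz
f_est = max(candidates, key=power_at)
print(f"estimated pulse: {f_est * 60:.0f} bpm")
```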

  5. High-performance low-noise 128-channel readout-integrated circuit for flat-panel x-ray detector systems

    NASA Astrophysics Data System (ADS)

    Beuville, Eric J.; Belding, Mark; Costello, Adrienne N.; Hansen, Randy; Petronio, Susan M.

    2004-05-01

    A silicon mixed-signal integrated circuit is needed to extract and process x-ray induced signals from a coated flat panel thin film transistor array (TFT) in order to generate a digital x-ray image. Indigo Systems Corporation has designed, fabricated, and tested such a readout integrated circuit (ROIC), the ISC9717. This off-the-shelf, high performance, low-noise, 128-channel device is fully programmable with a multistage pipelined architecture and a 9 to 14-bit programmable A/D converter per channel, making it suitable for numerous X-ray medical imaging applications. These include high-resolution radiography in single frame mode and fluoroscopy, where high frame rates are required. The ISC9717 can be used with various flat panel arrays and solid-state detector materials: Selenium (Se), Cesium Iodide (CsI), Silicon (Si), Amorphous Silicon, Gallium Arsenide (GaAs), and Cadmium Zinc Telluride (CdZnTe). The 80-micron pitch ROIC is designed to interface (wire bonding or flip-chip) along one or two sides of the x-ray panel, where ROICs are abutted vertically, each reading out charge from pixels multiplexed onto 128 horizontal read lines. The paper will present the design and test results of the ROIC, including the mechanical and electrical interface to a TFT array, system performance requirements, output multiplexing of the digital signals to an off-board processor, and characterization test results from fabricated arrays.

  6. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where high performance thin layer chromatography (TLC) fits in the scheme of modern chromatography and why, in some situations, it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  7. Monoclonal antibody heterogeneity analysis and deamidation monitoring with high-performance cation-exchange chromatofocusing using simple, two component buffer systems.

    PubMed

    Kang, Xuezhen; Kutzko, Joseph P; Hayes, Michael L; Frey, Douglas D

    2013-03-29

    The use of either a polyampholyte buffer or a simple buffer system for the high-performance cation-exchange chromatofocusing of monoclonal antibodies is demonstrated for the case where the pH gradient is produced entirely inside the column and with no external mixing of buffers. The simple buffer system used was composed of two buffering species, one which becomes adsorbed onto the column packing and one which does not adsorb, together with an adsorbed ion that does not participate in acid-base equilibrium. The method which employs the simple buffer system is capable of producing a gradual pH gradient in the neutral to acidic pH range that can be adjusted by proper selection of the starting and ending pH values for the gradient as well as the buffering species concentration, pKa, and molecular size. By using this approach, variants of representative monoclonal antibodies with isoelectric points of 7.0 or less were separated with high resolution so that the approach can serve as a complementary alternative to isoelectric focusing for characterizing a monoclonal antibody based on differences in the isoelectric points of the variants present. Because the simple buffer system used eliminates the use of polyampholytes, the method is suitable for antibody heterogeneity analysis coupled with mass spectrometry. The method can also be used at the preparative scale to collect highly purified isoelectric variants of an antibody for further study. To illustrate this, a single isoelectric point variant of a monoclonal antibody was collected and used for a stability study under forced deamidation conditions.
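The internally formed pH gradient described above ultimately rests on ordinary acid-base equilibria. As generic background chemistry (not the paper's retained-buffer column model), the Henderson-Hasselbalch relation shows how local pH falls as the conjugate-base fraction of a buffering species decreases:

```python
# Henderson-Hasselbalch sketch: pH = pKa + log10([A-]/[HA]) for a
# monoprotic buffer. The pKa and fractions are illustrative; acetate is
# used only as a familiar stand-in buffering species.
import math

def buffer_ph(pka, base_fraction):
    """Local pH for a given conjugate-base fraction of the buffer."""
    return pka + math.log10(base_fraction / (1 - base_fraction))

pka = 4.76  # acetate, as a stand-in
for frac in (0.9, 0.5, 0.1):
    print(f"base fraction {frac:.1f} -> pH {buffer_ph(pka, frac):.2f}")
```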

  8. High-performance intraoperative cone-beam CT on a mobile C-arm: an integrated system for guidance of head and neck surgery

    NASA Astrophysics Data System (ADS)

    Siewerdsen, J. H.; Daly, M. J.; Chan, H.; Nithiananthan, S.; Hamming, N.; Brock, K. K.; Irish, J. C.

    2009-02-01

    A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video. Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such systems in complex OR environments.

  9. Laser videofluorometer system for real-time characterization of high-performance liquid chromatographic eluate. [3-hydroxy-benzo(a)pyrene

    SciTech Connect

    Skoropinski, D.B.; Callis, J.B.; Danielson, J.D.S.; Christian, G.D.

    1986-11-01

    A second generation videofluorometer has been developed for real-time characterization of high-performance liquid chromatographic eluate. The instrument features a nitrogen-laser-pumped dye laser as excitation source and quarter meter polychromator/microchannel plate-intensified diode array as fluorescence detector. The dye laser cavity is tuned with a moving-iron galvanometer scanner grating drive, permitting the laser output to be changed to any wavelength in its range in less than 40 ms. Thus, the optimum excitation wavelength can be chosen for each chromatographic region. A minimum detection limit of 13 pptr has been obtained for 3-hydroxy-benzo(a)pyrene in a conventional fluorescence cuvette with a 30-s data acquisition. For the same substance eluted chromatographically, a minimum detection limit of 50 pg has been obtained, and a linear dynamic range of greater than 3 orders of magnitude observed. An extract of soil that had been contaminated with polyaromatic hydrocarbons was analyzed as a practical test of the system, permitting the quantitation of three known species, and the identification and quantitation of a previously unknown fourth compound.

  10. Final Assessment of Preindustrial Solid-State Route for High-Performance Mg-System Alloys Production: Concluding the EU Green Metallurgy Project

    NASA Astrophysics Data System (ADS)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Giger, Franz; Kim, Shae K.

    2013-10-01

    The Green Metallurgy Project, a LIFE+ project co-financed by the European Union Commission, has now been completed. The purpose of the Green Metallurgy Project was to establish and assess a preindustrial process capable of producing nanostructure-based high-performance Mg-Zn(Y) magnesium alloys and fully recycled eco-magnesium alloys. In this work, the Consortium presents the final outcome and verification of the completed prototype construction. To compare upstream cradle-to-grave footprints when ternary nanostructured Mg-Y-Zn alloys or recycled eco-magnesium chips are produced during the process cycle using the same equipment, a life cycle analysis was completed following the ISO 14040 methodology. During tests to fine-tune the prototype machinery and compare the quality of semifinished bars produced using the scaled-up system, the Buhler team produced interesting and significant results. Their tests showed the ternary Mg-Y-Zn magnesium alloys to have a higher specific strength than the 6000 series wrought aluminum alloys usually employed in automotive components.

  11. Engineering development of coal-fired high performance power systems, Phases 2 and 3. Quarterly progress report, October 1--December 31, 1996. Final report

    SciTech Connect

    1996-12-31

    The goals of this program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% efficiency (HHV); NOx, SOx, and particulate emissions no more than 10% of NSPS; coal providing >=65% of heat input; all solid wastes benign; and a cost of electricity at 90% of that of present plants. Work reported herein is from Task 1.3 HIPPS Commercial Plant Design, Task 2.2 HITAF Air Heater, and Task 2.4 Duct Heater Design. The impact on cycle efficiency of integrating various technology advances is presented. The criteria associated with a commercial HIPPS plant design, as well as possible environmental control options, are presented. The design of the HITAF air heaters, both radiative and convective, is the most critical task in the program. In this report, a summary of the effort associated with the radiative air heater designs that have been considered is provided. The primary testing of the air heater design will be carried out in the UND/EERC pilot-scale furnace; progress to date on the design and construction of the furnace is a major part of this report. The results of laboratory and bench-scale activities associated with defining slag properties are presented. Correct material selection is critical for the success of the concept; the materials, both ceramic and metallic, being considered for the radiant air heater are presented. The activities associated with the duct heater are also presented.

  12. Do they see eye to eye? Management and employee perspectives of high-performance work systems and influence processes on service quality.

    PubMed

    Liao, Hui; Toya, Keiko; Lepak, David P; Hong, Ying

    2009-03-01

    Extant research on high-performance work systems (HPWSs) has primarily examined the effects of HPWSs on establishment or firm-level performance from a management perspective in manufacturing settings. The current study extends this literature by differentiating management and employee perspectives of HPWSs and examining how the two perspectives relate to employee individual performance in the service context. Data collected in three phases from multiple sources involving 292 managers, 830 employees, and 1,772 customers of 91 bank branches revealed significant differences between management and employee perspectives of HPWSs. There were also significant differences in employee perspectives of HPWSs among employees of different employment statuses and among employees of the same status. Further, employee perspective of HPWSs was positively related to individual general service performance through the mediation of employee human capital and perceived organizational support and was positively related to individual knowledge-intensive service performance through the mediation of employee human capital and psychological empowerment. At the same time, management perspective of HPWSs was related to employee human capital and both types of service performance. Finally, a branch's overall knowledge-intensive service performance was positively associated with customer overall satisfaction with the branch's service.

  13. A meta-analysis of country differences in the high-performance work system-business performance relationship: the roles of national culture and managerial discretion.

    PubMed

    Rabl, Tanja; Jayasinghe, Mevan; Gerhart, Barry; Kühlmann, Torsten M

    2014-11-01

    Our article develops a conceptual framework based primarily on national culture perspectives but also incorporating the role of managerial discretion (cultural tightness-looseness, institutional flexibility), which is aimed at achieving a better understanding of how the effectiveness of high-performance work systems (HPWSs) may vary across countries. Based on a meta-analysis of 156 HPWS-business performance effect sizes from 35,767 firms and establishments in 29 countries, we found that the mean HPWS-business performance effect size was positive overall (corrected r = .28) and positive in each country, regardless of its national culture or degree of institutional flexibility. In the case of national culture, the HPWS-business performance relationship was, on average, actually more strongly positive in countries where the degree of a priori hypothesized consistency or fit between an HPWS and national culture (according to national culture perspectives) was lower, except in the case of tight national cultures, where greater a priori fit of an HPWS with national culture was associated with a more positive HPWS-business performance effect size. However, in loose cultures (and in cultures that were neither tight nor loose), less a priori hypothesized consistency between an HPWS and national culture was associated with higher HPWS effectiveness. As such, our findings suggest the importance of not only national culture but also managerial discretion in understanding the HPWS-business performance relationship. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
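The pooling behind a summary estimate like the corrected r = .28 can be illustrated, in heavily simplified form, as a sample-size-weighted mean of per-study correlations. The per-study effect sizes below are invented; real meta-analyses also correct for measurement error and sampling variance.

```python
# Toy meta-analytic pooling: weight each study's observed correlation by
# its sample size. (n, r) pairs are hypothetical, not the paper's data.
studies = [
    (120, 0.31),
    (450, 0.25),
    (80,  0.40),
    (900, 0.27),
]

total_n = sum(n for n, _ in studies)
r_bar = sum(n * r for n, r in studies) / total_n
print(f"pooled r = {r_bar:.3f}")
```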

  14. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    PubMed

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analyses. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance.
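The mediation logic tested here (HPWS -> empowerment -> service performance) reduces, in its simplest single-level form, to an indirect effect equal to the product of the two path coefficients. The sketch below simulates data with known paths and recovers a*b by ordinary least squares; it is a didactic stand-in, not the paper's hierarchical linear model.

```python
# Simulate x -> m -> y with path coefficients 0.6 and 0.5, then estimate
# the indirect effect a*b from simple regression slopes. For brevity,
# path b is estimated ignoring x (a full mediation model would partial
# x out of the m -> y regression).
import random
random.seed(0)

n = 500
x = [random.gauss(0, 1) for _ in range(n)]          # HPWS exposure
m = [0.6 * xi + random.gauss(0, 1) for xi in x]     # psychological empowerment
y = [0.5 * mi + random.gauss(0, 1) for mi in m]     # service performance

def slope(u, v):
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

a = slope(x, m)       # path a: x -> m
b = slope(m, y)       # path b: m -> y
print(f"indirect effect a*b ~= {a * b:.2f}")  # near the true 0.6*0.5 = 0.30
```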

  15. Magnetic ionic liquid aqueous two-phase system coupled with high performance liquid chromatography: A rapid approach for determination of chloramphenicol in water environment.

    PubMed

    Yao, Tian; Yao, Shun

    2017-01-20

    A novel organic magnetic ionic liquid based on a guanidinium cation was synthesized and characterized. A new method coupling a magnetic ionic liquid aqueous two-phase system (MILATPs) with high-performance liquid chromatography (HPLC) was established to preconcentrate and determine trace amounts of chloramphenicol (CAP) in the water environment for the first time. In the absence of volatile organic solvents, MILATPs not only offers rapid extraction but also responds to an external magnetic field, which can be applied to assist phase separation. The phase behavior of MILATPs was investigated, and the phase equilibrium data were correlated with the Merchuk equation. Various factors influencing CAP recovery were systematically investigated and optimized. Under the optimal conditions, the preconcentration factor was 147.2, with precision values (RSD) of 2.42% and 4.45% for intra-day (n=6) and inter-day (n=6) measurements, respectively. The limit of detection (LOD) and limit of quantitation (LOQ) were 0.14 ng mL(-1) and 0.42 ng mL(-1), respectively. A good linear range of 12.25-2200 ng mL(-1) was obtained. Finally, the validated method was successfully applied to the analysis of CAP in several environmental waters, with recoveries for the spiked samples in the acceptable range of 94.6%-99.72%. MILATPs thus shows great potential to promote new developments in the extraction, separation and pretreatment of various biochemical samples.
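The Merchuk equation used to correlate ATPS binodal data has the form Y = A·exp(B·X^0.5 - C·X^3), where X and Y are the weight fractions of the two phase-forming components. A minimal evaluation sketch follows; the coefficients are invented for illustration and are not the fitted values for this MILATPs system.

```python
# Evaluate a Merchuk-form binodal Y = A * exp(B * sqrt(X) - C * X**3).
# A, B, C below are made-up coefficients chosen only to give the
# characteristic monotonically falling phase-boundary shape.
import math

A, B, C = 90.0, -0.35, 2.5e-4

def binodal(x):
    """Binodal composition Y (wt%) at composition X (wt%)."""
    return A * math.exp(B * math.sqrt(x) - C * x ** 3)

for x in (5, 10, 20):
    print(f"X = {x:2d} wt%  ->  Y = {binodal(x):.1f} wt%")
```

In practice A, B, and C are obtained by least-squares fitting of measured binodal points, after which the equation interpolates the phase boundary.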

  16. Design and implementation of an automated liquid-phase microextraction-chip system coupled on-line with high performance liquid chromatography.

    PubMed

    Li, Bin; Petersen, Nickolaj Jacob; Payán, María D Ramos; Hansen, Steen Honoré; Pedersen-Bjergaard, Stig

    2014-03-01

    An automated liquid-phase microextraction (LPME) device in a chip format has been developed and coupled directly to high performance liquid chromatography (HPLC). A 10-port 2-position switching valve was used to hyphenate the LPME-chip with the HPLC autosampler, and to collect the extracted analytes, which were then delivered to the HPLC column. The LPME-chip-HPLC system was completely automated and controlled by the software of the HPLC instrument. The performance of this system was demonstrated with five alkaloids, i.e. morphine, codeine, thebaine, papaverine, and noscapine, as model analytes. The composition of the supported liquid membrane (SLM) and carrier was optimized in order to achieve reasonable extraction performance for all five alkaloids. With 1-octanol as SLM solvent and with 25 mM sodium octanoate as anionic carrier, extraction recoveries for the different opium alkaloids ranged between 17% and 45%. The extraction provided high selectivity, and no interfering peaks were observed in the chromatograms when the method was applied to human urine samples spiked with alkaloids. The detection limits using UV detection were in the range of 1-21 ng/mL for the five opium alkaloids present in water samples. The repeatability was within 5.0-10.8% (RSD). The membrane liquid in the LPME-chip was regenerated automatically after every third injection. With this procedure the liquid membrane in the LPME-chip was stable for 3-7 days of continuous operation, depending on the complexity of the sample solutions. With this LPME-chip-HPLC system, series of samples were automatically injected, extracted, separated, and detected without any operator interaction.
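Recovery and enrichment in LPME are linked through the donor/acceptor volume ratio, which is why a chip-scale acceptor channel can concentrate analyte even at the modest recoveries reported above. A sketch with invented volumes and concentrations:

```python
# Recovery vs. enrichment in LPME. All volumes, concentrations, and the
# assumed 30% transfer are illustrative, not the paper's values.
v_sample = 500.0     # uL donor (sample) volume
v_acceptor = 20.0    # uL acceptor volume
c_sample = 100.0     # ng/mL analyte in the sample

n_total = c_sample * v_sample / 1000           # ng of analyte in the donor
n_acceptor = 0.30 * n_total                    # assume 30% transferred

recovery = 100 * n_acceptor / n_total          # fraction transferred (%)
c_acceptor = n_acceptor / (v_acceptor / 1000)  # ng/mL in the acceptor
enrichment = c_acceptor / c_sample             # concentration gain
print(f"recovery = {recovery:.0f}%, enrichment = {enrichment:.1f}x")
```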

  17. Multilayer high performance insulation materials

    NASA Technical Reports Server (NTRS)

    Stuckey, J. M.

    1971-01-01

    A number of tests are required to evaluate both multilayer high performance insulation samples and the materials that comprise them. Some of the techniques and tests being employed for these evaluations and some of the results obtained from thermal conductivity tests, outgassing studies, effect of pressure on layer density tests, hypervelocity impact tests, and a multilayer high performance insulation ambient storage program at the Kennedy Space Center are presented.

  18. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistances.

  19. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    NASA Astrophysics Data System (ADS)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.

  20. PEGylated hybrid ytterbia nanoparticles as high-performance diagnostic probes for in vivo magnetic resonance and X-ray computed tomography imaging with low systemic toxicity

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Pu, Fang; Liu, Jianhua; Jiang, Liyan; Yuan, Qinghai; Li, Zhengqiang; Ren, Jinsong; Qu, Xiaogang

    2013-05-01

    Novel nanoparticulate contrast agents with low systemic toxicity and low cost have exhibited advantages over routinely used small-molecule contrast agents for the diagnosis and prognosis of disease. Herein, we designed and synthesized PEGylated hybrid ytterbia nanoparticles as high-performance nanoprobes for X-ray computed tomography (CT) imaging and magnetic resonance (MR) imaging both in vitro and in vivo. These well-defined nanoparticles were facile to prepare and cost-effective, meeting the criteria for a biomedical material. Compared with Iobitridol, routinely used in the clinic, our PEG-Yb2O3:Gd nanoparticles provided significantly enhanced contrast at clinical voltages ranging from 80 kVp to 140 kVp owing to the high atomic number and well-positioned K-edge energy of ytterbium. Through gadolinium doping, the nanoparticulate contrast agent could simultaneously provide excellent MR imaging, revealing organ enrichment and bio-distribution similar to the CT imaging results. The marked improvement in imaging efficiency was mainly attributed to the high content of Yb and Gd in a single nanoparticle, making these nanoparticles suitable for dual-modal diagnostic imaging with a low single-injection dose. In addition, detailed toxicological studies in vitro and in vivo indicated that the uniformly sized PEG-Yb2O3:Gd nanoparticles possessed excellent biocompatibility and overall safety.

  1. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementations of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C. Parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high performance computing (HPC) cluster, a Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability, with speedup proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. The typical speedup achieved for parallel MCML (up to 326x) increased linearly with problem size. The precision of MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially resolved results. The presented parallel versions of the MCML and IAD programs are portable across multiple computing platforms. The parallel programs can significantly speed up simulations for scientists and can be utilized to their full potential on computing systems that are readily available without additional cost.
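
    The communication-overhead and saturation behavior described above can be illustrated with a simple fixed-overhead speedup model; the per-packet and per-core cost constants below are invented for illustration and are not taken from the paper.

```python
# Hypothetical fixed-overhead speedup model for a parallel Monte Carlo
# photon-transport run: per-core compute time shrinks as cores are added,
# while an assumed fixed communication cost accrues per extra core.

def speedup(n_cores, n_packets, t_packet=1e-5, t_comm=0.05):
    """Predicted speedup for n_cores workers tracing n_packets photon packets.

    t_packet: assumed serial time per photon packet (seconds)
    t_comm:   assumed communication overhead per additional core (seconds)
    """
    t_serial = n_packets * t_packet
    t_parallel = t_serial / n_cores + t_comm * (n_cores - 1)
    return t_serial / t_parallel

# A larger problem keeps scaling near-linear at core counts where a
# small problem has already passed its saturation point.
small = [speedup(n, 10**4) for n in (1, 10, 100)]
large = [speedup(n, 10**8) for n in (1, 10, 100)]
```

    Under this toy model the saturation point shifts to higher core counts as the problem size grows, which is qualitatively the behavior the abstract reports for parallel MCML.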

  2. Prospective Randomized Controlled Study on the Efficacy of Multimedia Informed Consent for Patients Scheduled to Undergo Green-Light High-Performance System Photoselective Vaporization of the Prostate

    PubMed Central

    Ham, Dong Yeub; Choi, Woo Suk; Song, Sang Hoon; Ahn, Young-Joon; Park, Hyoung Keun; Kim, Hyeong Gon

    2016-01-01

    Purpose The aim of this study was to evaluate the efficacy of a multimedia informed consent (IC) presentation on the understanding and satisfaction of patients who were scheduled to receive 120-W green-light high-performance system photoselective vaporization of the prostate (HPS-PVP). Materials and Methods A multimedia IC (M-IC) presentation for HPS-PVP was developed. Forty men with benign prostatic hyperplasia who were scheduled to undergo HPS-PVP were prospectively randomized to a conventional written IC group (W-IC group, n=20) or the M-IC group (n=20). The allocated IC was obtained by one certified urologist, followed by a 15-question test (maximum score, 15) to evaluate objective understanding, and questionnaires on subjective understanding (range, 0~10) and satisfaction (range, 0~10) using a visual analogue scale. Results Demographic characteristics, including age and the highest level of education, did not significantly differ between the two groups. No significant differences were found in scores reflecting the objective understanding of HPS-PVP (9.9±2.3 vs. 10.6±2.8, p=0.332) or in subjective understanding scores (7.5±2.1 vs. 8.6±1.7, p=0.122); however, the M-IC group showed higher satisfaction scores than the W-IC group (7.4±1.7 vs. 8.4±1.5, p=0.033). After adjusting for age and educational level, the M-IC group still had significantly higher satisfaction scores. Conclusions M-IC did not enhance the objective knowledge of patients regarding this surgical procedure. However, it improved the satisfaction of patients with the IC process itself. PMID:27169129
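
    The between-group comparison of satisfaction scores can be reproduced approximately from the summary statistics in the abstract (7.4±1.7 vs. 8.4±1.5, n=20 per group). The sketch below computes Welch's t statistic from those summaries; it is illustrative only, and the paper's exact statistical test may differ.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) from group summary statistics."""
    se1, se2 = sd1**2 / n1, sd2**2 / n2
    t = (mean2 - mean1) / math.sqrt(se1 + se2)
    df = (se1 + se2)**2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df

# Satisfaction scores: W-IC 7.4 +/- 1.7 (n=20) vs. M-IC 8.4 +/- 1.5 (n=20)
t, df = welch_t(7.4, 1.7, 20, 8.4, 1.5, 20)
```
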

  3. A Systems Approach to High Performance Buildings: A Computational Systems Engineering R&D Program to Increase DoD Energy Efficiency

    DTIC Science & Technology

    2012-02-01

    Table-of-contents excerpts indicate coverage of design for low-energy building ventilation and space-conditioning systems, building energy models, and reduced-order modeling and control design for low-energy building systems, with a focus on the modeling and control of airflow in buildings.

  4. Teacher Accountability at High Performing Charter Schools

    ERIC Educational Resources Information Center

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  5. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  6. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  7. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 C.

  8. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field-emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  9. High-performance computing and communications

    SciTech Connect

    Stevens, R.

    1993-11-01

    This presentation has two parts. The first part discusses the US High-Performance Computing and Communications program -- its goals, funding, process, revisions, and research in high-performance computing systems, advanced software technology, and basic research and human resources. The second part of the presentation covers specific work conducted under this program at Argonne National Laboratory. Argonne's efforts focus on computational science research, software tool development, and evaluation of experimental computer architectures. In addition, the author describes collaborative activities at Argonne in high-performance computing, including an Argonne/IBM project to evaluate and test IBM's newest parallel computers and the Scalable I/O Initiative being spearheaded by the Concurrent Supercomputing Consortium.

  10. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties involved in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate on continually improving the energy efficiency and durability of new houses.

  11. High performance dielectric materials development

    NASA Astrophysics Data System (ADS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-09-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  12. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  13. High-performance sports medicine.

    PubMed

    Speed, Cathy

    2013-02-01

    High performance sports medicine involves the medical care of athletes, who are extraordinary individuals and who are exposed to intensive physical and psychological stresses during training and competition. The physician has a broad remit and acts as a 'medical guardian' to optimise health while minimising risks. This review describes this interesting field of medicine, its unique challenges and priorities for the physician in delivering best healthcare.

  14. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  15. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available for utilizing these tools and technologies to help in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aides. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environments.

  16. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  17. Massive Contingency Analysis with High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Chen, Yousu; Nieplocha, Jaroslaw

    2009-07-26

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimates. Contingency analysis is also extensively used in power market operation for feasibility tests of market solutions. Faster analysis of more cases is required to operate today's power grids safely and reliably with smaller operating margins and more intermittent renewable energy sources. Enabled by the latest developments in the computer industry, high performance computing holds the promise of meeting this need in the power industry. This paper investigates the potential of high performance computing for massive contingency analysis. The framework of "N-x" contingency analysis is established, and computational load balancing schemes are studied and implemented on high performance computers. Case studies of massive 300,000-contingency-case analysis using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing and demonstrate the performance of the framework and computational load balancing schemes.
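
    One plausible static load-balancing scheme for distributing contingency cases across workers is greedy longest-processing-time assignment. The sketch below is an assumption-laden illustration (per-case cost estimates and worker count are invented), not the scheme implemented in the paper.

```python
import heapq

def balance_cases(case_costs, n_workers):
    """Greedy longest-processing-time assignment of contingency cases
    (estimated solve cost per case) to workers; returns per-worker case lists."""
    # Min-heap of (current_load, worker_id)
    heap = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    # Assign the most expensive remaining case to the least-loaded worker
    for case, cost in sorted(case_costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(case)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Hypothetical per-case solve-time estimates (seconds) for 1000 contingencies
costs = {f"line-{i}": 1.0 + (i % 7) * 0.5 for i in range(1000)}
plan = balance_cases(costs, 8)
```

    Dynamic schemes (a shared work queue with workers pulling cases on demand) are an alternative when per-case costs cannot be estimated in advance.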

  18. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed, specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  19. Ultra-Sensitive Elemental Analysis Using Plasmas 5. Speciation of Arsenic Compounds in Biological Samples by High Performance Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry System

    NASA Astrophysics Data System (ADS)

    Kaise, Toshikazu

    Arsenic originating from the lithosphere is widely distributed in the environment. Many arsenicals in the environment are organic and methylated species. These arsenic compounds in drinking water or food products of marine origin are absorbed in the human digestive tract, metabolized in the human body, and excreted via the urine. Because arsenic shows varying biological aspects depending on its chemical species, the biological characteristics of arsenic must be determined. It is thought that some metabolic pathways for arsenic and some arsenic circulation exist in aqueous ecosystems. In this paper, the current status of the speciation analysis of arsenic by HPLC/ICP-MS (high performance liquid chromatography-inductively coupled plasma mass spectrometry) in environmental and biological samples is summarized using recent data.

  20. High performance pyroelectric infrared detector

    NASA Astrophysics Data System (ADS)

    Hu, Xu; Luo, Haosu; Ji, Yulong; Yang, Chunli

    2015-10-01

    Single-element infrared detectors made with the relaxor ferroelectric crystal PMNT exhibit excellent performance. Topics covered in this paper include detector capacitance, frequency-response characteristics, and detectivity. Measurements show that the detectivity of detectors made with the relaxor ferroelectric crystal PMNT exceeds that of detectors made with LT by more than a factor of three, with D* above 1×10^9 cm·Hz^0.5·W^-1. The detector will be applied in NDIR spectrographs, FFT spectrographs, and similar instruments. The high-performance pyroelectric infrared detector under development will broaden the application areas of infrared detectors.

  1. High-performance permanent magnets.

    PubMed

    Goll, D; Kronmüller, H

    2000-10-01

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE = Nd, Pr, Sm) with transition metals (TM = Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of > 15 compared with conventional ferrite pms therefore contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.

  2. High-performance permanent magnets

    NASA Astrophysics Data System (ADS)

    Goll, D.; Kronmüller, H.

    High-performance permanent magnets (pms) are based on compounds with outstanding intrinsic magnetic properties as well as on optimized microstructures and alloy compositions. The most powerful pm materials at present are RE-TM intermetallic alloys which derive their exceptional magnetic properties from the favourable combination of rare earth metals (RE=Nd, Pr, Sm) with transition metals (TM=Fe, Co), in particular magnets based on (Nd,Pr)2Fe14B and Sm2(Co,Cu,Fe,Zr)17. Their development during the last 20 years has involved a dramatic improvement in their performance by a factor of >15 compared with conventional ferrite pms therefore contributing positively to the ever-increasing demand for pms in many (including new) application fields, to the extent that RE-TM pms now account for nearly half of the worldwide market. This review article first gives a brief introduction to the basics of ferromagnetism to confer an insight into the variety of (permanent) magnets, their manufacture and application fields. We then examine the rather complex relationship between the microstructure and the magnetic properties for the two highest-performance and most promising pm materials mentioned. By using numerical micromagnetic simulations on the basis of the Finite Element technique the correlation can be quantitatively predicted, thus providing a powerful tool for the further development of optimized high-performance pms.

  3. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2008-01-01

    materials “from the atoms up” or to model biological systems at the molecular level. The speed and capacity of massively parallel computers are key...Streamlined, massively parallel high performance computing structural codes allow researchers to examine many relevant physical factors simultaneously...expenditure of energy, so that the drones can carry their load of sensors, communications devices, and fuel. AHPCRC researchers are using massively

  4. Determination of boron at sub-ppm levels in uranium oxide and aluminum by hyphenated system of complex formation reaction and high-performance liquid chromatography (HPLC).

    PubMed

    Rao, Radhika M; Aggarwal, Suresh K

    2008-04-15

    Boron, at sub-ppm levels, in U3O8 powder and aluminum metal was determined using complex formation and dynamically modified reversed-phase high-performance liquid chromatography (RP-HPLC). Curcumin was used to complex boron extracted with 2-ethyl-1,3-hexanediol (EHD). Separation of the complex from excess reagent, and thereafter its determination using an online diode array detector (DAD), was carried out by HPLC. The calibration curve was linear for boron amounts in the sample ranging from 0.02 microg to 0.5 microg. A precision of about 10% was achieved for boron determination in samples containing less than 1 ppmw of boron. The values obtained by HPLC were in good agreement with data available from other analytical techniques, and the precision of the HPLC data was much better than that reported for other techniques. The present hyphenated methodology of HPLC and complex formation reaction is attractive because of its cost-effectiveness, simplicity, versatility, and availability when compared to spectroscopic techniques such as ICP-MS and ICP-AES.
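
    A linear calibration like the one described (0.02-0.5 microg boron) reduces to an ordinary least-squares fit of detector response against amount, followed by back-calculation for unknowns. The peak-area values below are invented for illustration, not taken from the paper.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx)**2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration standards: boron amount (microg) vs. peak area
amounts = [0.02, 0.05, 0.1, 0.2, 0.5]
areas   = [4.1, 10.2, 20.5, 40.8, 101.9]   # invented detector responses
slope, intercept = fit_line(amounts, areas)

def amount_from_area(area):
    """Back-calculate the boron amount in an unknown from its peak area."""
    return (area - intercept) / slope
```
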

  5. Determination of propylthiouracil in pharmaceutical formulation by high-performance liquid-chromatography with a post-column iodine-azide reaction as a detection system.

    PubMed

    Zakrzewski, Robert

    2008-12-01

    A high-performance liquid chromatographic method with a post-column iodine-azide reaction was chosen and validated for the quantitative determination of propylthiouracil in tablets. Isocratic chromatography was performed on a C18 column with a mobile phase of acetonitrile-water-sodium azide solution (2.5%; pH 5.5), 24:26:50 (v/v/v), at a flow rate of 1.4 ml/min. Unreacted iodine from the post-column iodine-azide reaction was monitored with visible detection at lambda=350 nm. The method showed linearity within the range of 8-100 nM (r2>0.9988) and satisfactory inter-day precision (RSD<4.2%) and accuracy (recovery>91%). The limits of detection (DDL) and quantification (DQL) were 5 and 8 nM, respectively. The validation of the method also comprised specificity. The results obtained proved the suitability of the suggested method for its intended use.

  6. High Performance Perovskite Solar Cells.

    PubMed

    Tong, Xin; Lin, Feng; Wu, Jiang; Wang, Zhiming M

    2016-05-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole-transporting materials (HTMs) and electron-transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  7. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction. PMID:27774402

  8. Toward high performance graphene fibers.

    PubMed

    Chen, Li; He, Yuling; Chai, Songgang; Qiang, Hong; Chen, Feng; Fu, Qiang

    2013-07-07

    Two-dimensional graphene and graphene-based materials have attracted tremendous interest, and much attention has been drawn to exploring and applying their exceptional characteristics and properties. Integration of graphene sheets into macroscopic fibers is a very important route to their application and has received increasing interest. In this study, neat, macroscopic graphene fibers were continuously spun from graphene oxide (GO) suspensions followed by chemical reduction. By varying the wet-spinning conditions, a series of graphene fibers was prepared; the structural features and the mechanical and electrical performance of the fibers were then investigated. We found that the orientation of the graphene sheets, the interaction between inter-fiber graphene sheets, and the defects in the fibers have a pronounced effect on the properties of the fibers. Graphene fibers with excellent mechanical and electrical properties will enable great advances in high-tech applications. These findings provide guidance for the future production of high performance graphene fibers.

  9. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem with an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  10. Quantitative determination of 13 organophosphorous flame retardants and plasticizers in a wastewater treatment system by high performance liquid chromatography tandem mass spectrometry.

    PubMed

    Woudneh, Million B; Benskin, Jonathan P; Wang, Guanghui; Grace, Richard; Hamilton, M Coreen; Cosgrove, John R

    2015-06-26

    A method for quantitative determination of 13 organophosphorous compounds (OPs) was developed and applied to influent, primary sludge, activated sludge, biosolids, primary effluent and final effluent from a wastewater treatment plant (WWTP). The method involved solvent extraction followed by solid phase clean-up and analysis by high performance liquid chromatography positive electrospray ionization-tandem mass spectrometry (HPLC(+ESI)MS/MS). Replicate spike/recovery experiments revealed the method to have good accuracy (70-132%) and precision (<19% RSD) in all matrices. Detection limits of 0.1-5 ng/L for aqueous samples and 0.01-0.5 ng/g for solid samples were achieved. In the liquid waste stream, ∑OP concentrations were highest in influent (5764 ng/L), followed by primary effluent (4642 ng/L) and final effluent (2328 ng/L). In the solid waste stream, the highest ∑OP concentrations were observed in biosolids (3167 ng/g dw), followed by waste activated sludge (2294 ng/g dw) and primary sludge (2128 ng/g dw). These concentrations are nearly 30-fold higher than ∑polybrominated diphenyl ether (BDE) concentrations in influents and nearly 200-fold higher than ∑BDE concentrations in effluents from other sites in Canada. Tetrakis(2-chloroethyl)dichloroisopentyldiphosphate (V6), tripropyl phosphate (TnPrP), and tris(2,3-dibromopropyl)phosphate (TDBPP) were investigated for the first time in a WWTP. While TnPrP and TDBPP were not detected, V6 was observed at concentrations up to 7.9 ng/g in solid waste streams and up to 40.7 ng/L in liquid waste streams. The lack of removal of OPs during wastewater treatment is a concern due to their release into the aquatic environment.
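
    The accuracy and precision figures quoted above come from replicate spike/recovery experiments; the bookkeeping is simple enough to sketch. The replicate values and spike level below are hypothetical, not from the study:

```python
from statistics import mean, stdev

def recovery_stats(measured, spiked):
    """Percent recovery for each replicate and the relative standard
    deviation (RSD, %) across replicates, as used in spike/recovery QC."""
    recoveries = [100.0 * m / spiked for m in measured]
    rsd = 100.0 * stdev(recoveries) / mean(recoveries)
    return recoveries, rsd

# Hypothetical replicate results (ng/L) for a 100 ng/L spike
recoveries, rsd = recovery_stats([92.0, 88.0, 95.0, 90.0], 100.0)
```

    With these hypothetical numbers the recoveries fall inside the study's 70-132% acceptance window and the RSD well under the reported <19% bound.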

  11. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface, the imidazole groups of PAEBI bond with or chelate to the copper surface, resulting in strong adhesion between the copper and the polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The locus of failure, as studied by XPS and IR, indicates that PAEBI gives strong cohesive adhesion to copper. Owing to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer to enhance adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  12. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5 arcmin within 10 ms or +/-5 arcmin within 20 ms and maintains pointing stability within the antenna's 0.6 arcsec error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
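
    The resonance-compensating servo described above can be caricatured with a textbook discrete PID update; the gains, timestep and setpoint below are illustrative assumptions, not values from the ALMA controller:

```python
def pid_step(state, setpoint, measured, dt, kp, ki, kd):
    """One update of a discrete PID position loop (illustrative gains)."""
    err = setpoint - measured
    state["i"] += err * dt              # integral accumulator
    d = (err - state["e"]) / dt         # finite-difference derivative
    state["e"] = err                    # remember error for next step
    return kp * err + ki * state["i"] + kd * d

# Hypothetical: command a 1.5-unit throw starting from rest, 1 kHz loop rate
state = {"i": 0.0, "e": 0.0}
u = pid_step(state, 1.5, 0.0, 1e-3, kp=2.0, ki=50.0, kd=0.01)
```

    In a real implementation the derivative term would be filtered and the output shaped, as the abstract's "shaped transitions" suggest, to avoid exciting the 20 Hz resonance.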

  13. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with their low raw-material costs, great potential for simple, scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next-generation thin-film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/device system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers PCEs of 15-20%; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  14. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  15. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little to no knowledge of low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of as completely autonomous systems. Since computers do not work the same way humans do, the result is usually Low Performance Computing where HPC would be expected.

  16. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  17. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies among various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only channel of communication between trades and consultants, and where relationships are in general adversarial rather than cooperative, the chance of any one building system failing is greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  18. Metallic Ca-Rh/C-methanol, a high-performing system for the hydrodechlorination/ring reduction of mono- and polychlorinated aromatic substrates.

    PubMed

    Mitoma, Yoshiharu; Kakeda, Mitsunori; Simion, Alina Marieta; Egashira, Naoyoshi; Simion, Cristian

    2009-08-01

    We investigated the reduction of substituted mono- and polychlorobenzenes bearing functional groups such as methyl, methoxy, hydroxyl, and amino under mild conditions (80 degrees C with magnetic stirring for 2 h) using a system consisting of metallic calcium in methanol (as the hydrogen donor system) and 5 wt% Rh/C (as the hydrodechlorination/ring reduction catalyst). Hydrodechlorination took place readily for methoxy- and alkyl-chlorobenzenes, yielding the corresponding hydrodechlorinated compounds (57-76%) and affording the ring-reduced compounds as secondary reaction products (16-43%). Treatment of hydroxy- and amino-chlorobenzenes under the same conditions gave the corresponding hydrodechlorinated compounds (over 60%) along with the ring-reduced compounds. The results show that the reaction of substituted polychlorinated benzenes needs a longer reaction time (6 h), the transformation being nevertheless complete.

  19. A Novel Low-Power, High-Performance, Zero-Maintenance Closed-Path Trace Gas Eddy Covariance System with No Water Vapor Dilution or Spectroscopic Corrections

    NASA Astrophysics Data System (ADS)

    Sargent, S.; Somers, J. M.

    2015-12-01

    Trace-gas eddy covariance (EC) flux measurements can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictate the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which typically consumes on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean, and water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power bandwidth of 3.5 Hz.
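
    As a rough sketch of what a 3.5 Hz half-power bandwidth implies, the attenuation of a spectral component at frequency f can be estimated with a first-order response model; the model choice is an illustrative assumption, not something stated in the abstract:

```python
import math

def first_order_attenuation(f_hz, half_power_bw_hz):
    """Amplitude attenuation at frequency f for a system modeled as
    first-order with the given half-power (-3 dB) bandwidth."""
    return 1.0 / math.sqrt(1.0 + (f_hz / half_power_bw_hz) ** 2)

a = first_order_attenuation(3.5, 3.5)  # at the half-power point: 1/sqrt(2)
```

    Under this model, flux contributions well below 3.5 Hz pass nearly unattenuated, which is the practical meaning of the reported bandwidth.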

  20. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    NASA Astrophysics Data System (ADS)

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-02-01

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for use in a fixed-bed system. In catalyzing the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min⁻¹, while the TOF in the conventional batch reaction was 0.643 min⁻¹. The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy for preparing the CMF@PDA/Pd catalytic fixed bed was simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble metal nanoparticles, and it is amenable to automated industrial processes.
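
    Turnover frequency comparisons like the one above follow from the standard definition, TOF = moles converted per mole of catalytic metal per unit time; a minimal sketch with hypothetical quantities (only the 1.587 and 0.643 min⁻¹ figures are from the abstract):

```python
def turnover_frequency(mol_converted, mol_catalyst, minutes):
    """Moles of substrate converted per mole of catalytic metal per minute."""
    return mol_converted / (mol_catalyst * minutes)

# Hypothetical: 0.03 mmol 4-nitrophenol reduced over 0.1 mmol Pd in 1 min
tof_example = turnover_frequency(3.0e-5, 1.0e-4, 1.0)  # 0.3 min^-1

# Ratio of the reported fixed-bed and batch TOFs (values from the abstract)
speedup = 1.587 / 0.643  # roughly 2.5x
```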

  1. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance.

    PubMed

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-02-23

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for use in a fixed-bed system. In catalyzing the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min(-1), while the TOF in the conventional batch reaction was 0.643 min(-1). The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy for preparing the CMF@PDA/Pd catalytic fixed bed was simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble metal nanoparticles, and it is amenable to automated industrial processes.

  2. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    PubMed Central

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for use in a fixed-bed system. In catalyzing the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min−1, while the TOF in the conventional batch reaction was 0.643 min−1. The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy for preparing the CMF@PDA/Pd catalytic fixed bed was simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble metal nanoparticles, and it is amenable to automated industrial processes. PMID:26902657

  3. Linear payout systems for dispensing fiber-optic data links from precision-guided munitions or high-performance aircraft/missiles

    NASA Astrophysics Data System (ADS)

    Hoban, F.; Harms, D.

    1992-11-01

    Theoretical studies and empirical tests were conducted as part of the development and demonstration of a fiber-optic data link for weapon system guidance and control. These studies characterized the performance capability of several unique payout system concepts that permit dispensing a small-diameter (170- to 250-micrometer) fiber-optic data link from military weapon systems at high subsonic velocities. Theoretical predictions were compared with laboratory payout test results. Several engineering models that allow linear dispensing of the fiber from a fiber canister or spool were developed and tested. Material, mechanical, and physical concepts evaluated included spiral flow of air through a payout nozzle, rotating nozzles with fixed geometries, fiber adhesives, and mechanical vibrations. Real-time fiber payout tension loads were measured for several fiber diameters and adhesives. Excellent results were obtained with a silicone-based adhesive with a low coefficient of friction and a 170-micrometer-diameter fiber. The prototype tests verified the results predicted by the theoretical string dynamical models.

  4. High-performance solar collector

    NASA Technical Reports Server (NTRS)

    Beekley, D. C.; Mather, G. R., Jr.

    1979-01-01

    An evacuated all-glass concentric tube collector using air or liquid transfer media is very efficient at high temperatures. The collector can directly drive existing heating systems that are presently driven by fossil fuel, with relative ease of conversion and less expense than installing a complete solar heating system.

  5. Comprehensive two-dimensional high performance liquid chromatography system with immobilized liposome chromatography column and monolithic column for separation of the traditional Chinese medicine Schisandra chinensis.

    PubMed

    Wang, Shuowen; Wang, Chen; Zhao, Xin; Mao, Shilong; Wu, Yutian; Fan, Guorong

    2012-02-03

    A comprehensive two-dimensional (2D) separation is one that employs two separation dimensions (columns) and draws on all of the available resolving power of each dimension to separate the components in a sample. In this study, a comprehensive 2D chromatography approach was developed for the separation and identification of membrane-permeable compounds in the famous traditional Chinese medicine Schisandra chinensis. The first-dimension column was an immobilized liposome chromatography (ILC) column, which mimics biological membranes and can be used to study drug-membrane interactions in liquid chromatography. Using an automatic ten-port switching valve equipped with two sample loops, fractions from the first dimension were introduced into the second dimension, which consisted of a silica monolithic column. More than 40 components of Schisandra chinensis were resolved using the developed separation system, and among them 14 compounds were identified as interacting with the ILC column on the basis of their retention behavior, UV and mass data. With this comprehensive 2D-HPLC system, three-dimensional chromatographic fingerprints of Schisandra chinensis were preliminarily established and processed using principal component analysis and hierarchical clustering analysis. The information obtained can distinguish unacceptable samples in quality control. The results demonstrate that the 2D biochromatography system has advantages in finding strongly binding bioactive components and provides enhanced peak capacity, good sensitivity and powerful resolution for biological fingerprinting analysis of complex TCMs, making it a useful means of controlling the quality of, and clarifying the membrane permeability of, the compounds in Schisandra chinensis.
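
    As a sketch of the fingerprint post-processing step mentioned above, principal component scores for a sample-by-peak-area matrix can be computed with an SVD; the data here are synthetic stand-ins for real chromatographic fingerprints, and the implementation is illustrative rather than the authors' own:

```python
import numpy as np

def pca_scores(x, n_components=2):
    """Project samples (rows) onto their leading principal components.
    Columns are peak areas; data are mean-centered before the SVD."""
    xc = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n_components].T

rng = np.random.default_rng(0)
fingerprints = rng.random((6, 40))   # 6 samples x 40 chromatographic peaks
scores = pca_scores(fingerprints)    # shape (6, 2)
```

    Outlying samples in the score plot would flag batches that fail quality control, which is how such fingerprints are typically used.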

  6. Indoor Air Quality in High Performance Schools

    EPA Pesticide Factsheets

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  7. Facilitating NASA's Use of GEIA-STD-0005-1, Performance Standard for Aerospace and High Performance Electronic Systems Containing Lead-Free Solder

    NASA Technical Reports Server (NTRS)

    Plante, Jeannete

    2010-01-01

    GEIA-STD-0005-1 defines the objectives of, and requirements for, documenting processes that assure customers and regulatory agencies that aerospace and high performance (AHP) electronic systems containing lead-free solder, piece parts, and boards will satisfy the applicable requirements for performance, reliability, airworthiness, safety, and certifiability throughout the specified life of performance. It communicates requirements for a Lead-Free Control Plan (LFCP) to assist suppliers in the development of their own plans. The Plan documents the Plan Owner's (supplier's) processes that assure their customer and all other stakeholders that the Plan Owner's products will continue to meet their requirements. The presentation reviews traceability of quality assurance requirements and the LFCP template instructions.

  8. A radio-high-performance liquid chromatography dual-flow cell gamma-detection system for on-line radiochemical purity and labeling efficiency determination.

    PubMed

    Lindegren, S; Jensen, H; Jacobsson, L

    2014-04-11

    In this study, a method of determining radiochemical yield and radiochemical purity by radio-HPLC detection employing a dual-flow-cell system is evaluated. The dual-flow cell, consisting of a reference cell and an analytical cell, was constructed from two PEEK capillary coils sized to fit into the well of a NaI(Tl) detector. The radio-HPLC flow was directed from the injector to the reference cell, allowing on-line detection of the total injected sample activity prior to entering the HPLC column. The radioactivity eluted from the column was then detected in the analytical cell. In this way, the sample acts as its own standard, a feature enabling on-line quantification of the radioactivity passing through the system. All data were acquired on-line via an analog signal from a rate meter using chromatographic software. The radiochemical yield and recovery could be determined simply and accurately by integrating the peak areas in the chromatograms obtained from the reference and analytical cells, using an experimentally determined volume factor to correct for the different cell volumes.
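
    A minimal sketch of the yield calculation implied above, with hypothetical peak areas and an assumed volume factor (the abstract gives no numerical values):

```python
def radiochemical_yield(product_peak_area, reference_total_area, volume_factor):
    """Yield (%) from dual-cell radio-HPLC data: the product peak seen in the
    analytical cell referenced to the total injected activity seen in the
    reference cell, corrected by an empirical cell-volume factor."""
    return 100.0 * product_peak_area / (reference_total_area * volume_factor)

# Hypothetical: analytical peak 4.2e5 counts, reference 5.0e5 counts,
# analytical cell 5% larger than the reference cell (factor 1.05)
yield_pct = radiochemical_yield(4.2e5, 5.0e5, 1.05)  # -> 80.0
```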

  9. NASA's Advanced Solar Sail Propulsion System for Low-Cost Deep Space Exploration and Science Missions that Use High Performance Rollable Composite Booms

    NASA Technical Reports Server (NTRS)

    Fernandez, Juan M.; Rose, Geoffrey K.; Younger, Casey J.; Dean, Gregory D.; Warren, Jerry E.; Stohlman, Olive R.; Wilkie, W. Keats

    2017-01-01

    Several low-cost solar sail technology demonstrator missions are under development in the United States. However, the mass-saving benefits that composites can offer to such a mass-critical spacecraft architecture have not yet been realized. This is due to the lack of suitable composite booms that can fit inside CubeSat platforms and ultimately be readily scalable to much larger sizes, where their use can be fully optimized. To this end, a new effort focused on developing scalable rollable composite booms for solar sails and other deployable structures has begun. Seven-meter booms used to deploy a 90 m2 class solar sail that can fit inside a 6U CubeSat have already been developed. The NASA roadmap to a low-cost solar sail capability demonstration consists of increasing the size of these composite booms to enable sailcraft with a reflective area of up to 2000 m2 housed aboard small satellite platforms. This paper presents a solar sail system initially conceived as a risk-reduction alternative to the Near Earth Asteroid (NEA) Scout baseline design that has recently been slightly redesigned and proposed for follow-on missions. The features of the booms, the various deployment mechanisms for the booms and sail, and the ground support equipment used during testing are introduced. The results of structural analyses predict the performance of the system under microgravity conditions. Finally, the results of the functional and environmental testing campaign are shown.

  10. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  11. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  12. High performance electrolytes for MCFC

    DOEpatents

    Kaun, T.D.; Roche, M.F.

    1999-08-24

    A carbonate electrolyte of the Li/Na or CaBaLiNa system is described. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of CaCO3 and BaCO3, preferably in equimolar amounts. The presence of both CaCO3 and BaCO3 enables lower-temperature fuel cell operation. 15 figs.

  13. High performance electrolytes for MCFC

    DOEpatents

    Kaun, Thomas D.; Roche, Michael F.

    1999-01-01

    A carbonate electrolyte of the Li/Na or CaBaLiNa system. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of CaCO3 and BaCO3, preferably in equimolar amounts. The presence of both CaCO3 and BaCO3 enables lower-temperature fuel cell operation.

  14. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine was designed and built that achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.
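
    The "49% of Carnot" figure above is a ratio against the Carnot limit, eta_C = 1 - Tc/Th; a small sketch with hypothetical reservoir temperatures, since the abstract does not state the engine's operating temperatures:

```python
def fraction_of_carnot(efficiency, t_hot_k, t_cold_k):
    """Express an engine efficiency as a fraction of the Carnot limit
    eta_C = 1 - Tc/Th (temperatures in kelvin)."""
    eta_carnot = 1.0 - t_cold_k / t_hot_k
    return efficiency / eta_carnot

# Hypothetical: 32% thermal efficiency between 900 K and 300 K reservoirs
frac = fraction_of_carnot(0.32, 900.0, 300.0)  # eta_C = 2/3, so frac = 0.48
```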

  15. High-performance, highly bendable MoS2 transistors with high-k dielectrics for flexible low-power systems.

    PubMed

    Chang, Hsiao-Yu; Yang, Shixuan; Lee, Jongho; Tao, Li; Hwang, Wan-Sik; Jena, Debdeep; Lu, Nanshu; Akinwande, Deji

    2013-06-25

    While there have been increasing studies of MoS2 and other two-dimensional (2D) semiconducting dichalcogenides on hard conventional substrates, experimental and analytical studies on flexible substrates have been very limited so far, even though these 2D crystals are understood to have greater prospects for flexible smart systems. In this article, we report detailed studies of MoS2 transistors on industrial plastic sheets. Transistor characteristics afford more than 100x improvement in the ON/OFF current ratio and 4x enhancement in mobility compared to previous flexible MoS2 devices. Mechanical studies reveal robust electronic properties down to a bending radius of 1 mm, which is comparable to previous reports for flexible graphene transistors. Experimental investigation identifies crack formation in the dielectric as the responsible failure mechanism, demonstrating that the mechanical properties of the dielectric layer are critical for realizing flexible electronics that can accommodate high strain. Our uniaxial tensile tests have revealed that atomic-layer-deposited HfO2 and Al2O3 films have very similar crack onset strains. However, crack propagation is slower in HfO2 than in Al2O3, suggesting a subcritical fracture mechanism in the thin oxide films. Rigorous mechanics modeling provides guidance for achieving flexible MoS2 transistors that are reliable at sub-mm bending radii.

  16. EDITORIAL: High performance under pressure

    NASA Astrophysics Data System (ADS)

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane CMOS-based and fully compatible with future very-large scale integration of

  17. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  18. Nanocrystalline high performance permanent magnets

    NASA Astrophysics Data System (ADS)

    Gutfleisch, O.; Bollero, A.; Handstein, A.; Hinz, D.; Kirchner, A.; Yan, A.; Müller, K.-H.; Schultz, L.

    2002-04-01

    Recent developments in nanocrystalline rare earth-transition metal magnets are reviewed and emphasis is placed on research work at IFW Dresden. Principal synthesis methods include high energy ball milling, melt spinning and hydrogen assisted methods such as reactive milling and hydrogenation-disproportionation-desorption-recombination. These techniques are applied to NdFeB-, PrFeB- and SmCo-type systems with the aim to produce high remanence magnets with high coercivity. Concepts of maximizing the energy density in nanostructured magnets by either inducing a texture via anisotropic HDDR or hot deformation or enhancing the remanence via magnetic exchange coupling are evaluated.

  19. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, is limited in the number of channels, often suffers channel crosstalk, and is intermittent as the shaft rotates. Non-real-time digital communication is very popular but complex, with long development times and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, the bit resolution, and the update rate gives a data rate higher than is available from older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument-amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation, with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in the rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provide instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to

  20. High Performance Pulse Tube Cryocoolers

    NASA Astrophysics Data System (ADS)

    Olson, J. R.; Roth, E.; Champagne, P.; Evtimov, B.; Nast, T. C.

    2008-03-01

    Lockheed Martin's Advanced Technology Center has been developing pulse tube cryocoolers for more than ten years. Recent innovations include successful testing of four-stage coldheads, no-load temperature below 4 K, and the recent development of a high-efficiency compressor. This paper discusses the predicted performance of single and multiple stage pulse tube coldheads driven by our new 6 kg "M5Midi" compressor, which is capable of 90% efficiency with 200 W input power, and a maximum input power of 1000 W. This compressor retains the simplicity of earlier LM-ATC compressors: it has a moving magnet and an external electrical coil, minimizing organics in the working gas and requiring no electrical penetrations through the pressure wall. Motor losses were minimized during design, resulting in a simple, easily-manufactured compressor with state-of-the-art motor efficiency. The predicted cryocooler performance is presented as simple formulae, allowing an engineer to include the impact of a highly-optimized cryocooler into a full system analysis. Performance is given as a function of the heat rejection temperature and the cold tip temperatures and cooling loads.
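
    The abstract's "simple formulae" for cryocooler performance are not given, but a first-order model of the kind a system engineer might use can be sketched as follows. The 15%-of-Carnot fraction and the example load and temperatures are assumptions for illustration, not Lockheed Martin figures.

```python
def carnot_cop(t_cold_k: float, t_reject_k: float) -> float:
    """Ideal (Carnot) coefficient of performance of a refrigerator."""
    return t_cold_k / (t_reject_k - t_cold_k)

def compressor_input_power(load_w: float, t_cold_k: float, t_reject_k: float,
                           fraction_of_carnot: float = 0.15) -> float:
    """Input power needed to lift `load_w` at the cold tip, assuming the
    cooler runs at a fixed fraction of the Carnot COP (fraction assumed)."""
    return load_w / (carnot_cop(t_cold_k, t_reject_k) * fraction_of_carnot)

# Example: 5 W of cooling at 77 K while rejecting heat at 300 K.
p_in = compressor_input_power(5.0, 77.0, 300.0)
print(f"Estimated compressor input power: {p_in:.0f} W")
```

    A formula of this shape lets a designer fold the cooler into a full system analysis as a function of rejection temperature, cold-tip temperature, and cooling load, which is what the paper's performance formulae are intended for.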

  1. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  2. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  3. Use of dual priming oligonucleotide system-based multiplex RT-PCR combined with high performance liquid chromatography assay for simultaneous detection of five enteric viruses associated with acute enteritis.

    PubMed

    Fan, Wen-Lu; Wang, Zi-Wei; Qin, Yue; Sun, Chao; Liu, Zhong-Mei; Jiang, Yan-Ping; Qiao, Xin-Yuan; Tang, Li-Jie; Li, Yi-Jing; Xu, Yi-Gang

    2017-05-01

    In this study, a specific and sensitive method for simultaneous detection of human astrovirus, human rotavirus, norovirus, sapovirus and enteric adenovirus associated with acute enteritis was developed, based on the specific dual priming oligonucleotide (DPO) system and the sensitive high-performance liquid chromatography (HPLC) analysis. The DPO system-based multiplex reverse transcription-polymerase chain reaction (RT-PCR) combined with HPLC assay was more sensitive than agarose gel electrophoresis analysis and real-time SYBR Green PCR assay, and showed a specificity of 100% and sensitivity of 96%-100%. The high sensitivity and specificity of the assay indicates its great potential to be a useful tool for the accurate diagnosis of enteric virus infections.
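
    The reported rates are standard confusion-matrix quantities. The sample counts below are invented solely to reproduce the reported figures; the actual panel sizes are not given in the abstract.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true positives detected (recall)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of true negatives correctly rejected."""
    return tn / (tn + fp)

# Hypothetical validation panel: 48 of 50 infected samples detected,
# none of 40 virus-free samples falsely flagged.
print(f"sensitivity = {sensitivity(48, 2):.0%}")   # lower end of the 96%-100% range
print(f"specificity = {specificity(40, 0):.0%}")   # the reported 100%
```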

  4. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale, and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.

  5. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
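
    The rated energy and power densities in the patent abstract jointly imply a minimum full-power discharge time, a quick consistency check that can be applied to any ultracapacitor rating:

```python
ENERGY_DENSITY_WH_PER_KG = 5.0   # useful energy, from the patent abstract
POWER_DENSITY_W_PER_KG = 600.0   # power rating, from the patent abstract

# At rated power, the stored energy sustains the discharge for
# E [Wh] * 3600 [s/h] / P [W] seconds.
discharge_time_s = ENERGY_DENSITY_WH_PER_KG * 3600.0 / POWER_DENSITY_W_PER_KG
print(f"Full-power discharge time: {discharge_time_s:.0f} s")  # 30 s
```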

  6. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  7. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  8. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  9. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has developed from early short linear arrays. Such devices require high performance signal processing electronics to meet today's thermal imaging requirements for military and paramilitary applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  10. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  11. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations on a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for access to this data: through the NCI supercomputer; through a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; and remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), and agile enough to incorporate new technological advances and

  12. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  13. Energy Efficient Graphene Based High Performance Capacitors.

    PubMed

    Bae, Joonwon; Lee, Chang-Soo; Kwon, Oh Seok

    2016-10-27

    Graphene (GRP) is an interesting class of nano-structured electronic materials for various cutting-edge applications. To date, extensive research activities have been devoted to the investigation of the diverse properties of GRP. The incorporation of this elegant material can be very lucrative in terms of practical applications in energy storage/conversion systems. Among those various systems, high performance electrochemical capacitors (ECs) have become popular due to the recent need for energy-efficient and portable devices. Therefore, in this article, the application of GRP to capacitors is described succinctly. In particular, a concise summary of previous research activities regarding GRP based capacitors is also provided. It was revealed that many secondary materials such as polymers and metal oxides have been introduced to improve performance, and that diverse devices have been combined with capacitors for better use. More importantly, recent patents related to the preparation and application of GRP based capacitors are also introduced briefly. This article can provide essential information for future study.

  14. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  15. Rapidly Reconfigurable High Performance Computing Cluster

    DTIC Science & Technology

    2005-07-01

    Section 2, Background and Objectives: 2.1 High Performance Computing Trends; 2.2 Georgia Tech Activity in HPEC.

  16. Architecture Analysis of High Performance Capacitors (POSTPRINT)

    DTIC Science & Technology

    2009-07-01

    Includes the measurement of heat dissipated from a recently developed fluorenyl polyester (FPE) capacitor under an AC excitation. (Report AFRL-RZ-WP-TP-2010-2100; Hiroyuki Kosai and Tyler Bixel, UES, Inc., 2009.)

  17. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints and complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made, which increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use, and only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution, and operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho

  18. Design and performance of a new continuous-flow sample-introduction system for flame infrared-emission spectrometry: Applications in process analysis, flow injection analysis, and ion-exchange high-performance liquid chromatography.

    PubMed

    Lam, C K; Zhang, Y; Busch, M A; Busch, K W

    1993-06-01

    A new sample introduction system for the analysis of continuously flowing liquid streams by flame infrared-emission (FIRE) spectrometry has been developed. The system uses a specially designed purge cell to strip dissolved CO(2) from solution into a hydrogen gas stream that serves as the fuel for a hydrogen/air flame. Vibrationally excited CO(2) molecules present in the flame are monitored with a simple infrared filter (4.4 µm) photometer. The new system can be used to introduce analytes as a continuous liquid stream (process analysis mode) or on a discrete basis by sample injection (flow injection analysis mode). The key to the success of the method is the new purge-cell design. The small internal volume of the cell minimizes problems associated with purge-cell clean-out and produces sharp, reproducible signals. Spent analytical solution is continuously drained from the cell, making cell disconnection and cleaning between samples unnecessary. Under the conditions employed in this study, samples could be analyzed at a maximum rate of approximately 60/h. The new sample introduction system was successfully tested in both the process analysis and flow injection analysis modes for the determination of total inorganic carbon in Waco tap water. For the first time, flame infrared-emission spectrometry was successfully extended to non-volatile organic compounds by using chemical pretreatment with peroxydisulfate in the presence of silver ion to convert the analytes into dissolved carbon dioxide prior to purging and detection by the FIRE radiometer. A test of the peroxydisulfate/Ag(+) reaction using six organic acids and five sugars indicated that all 11 compounds were oxidized to nearly the same extent. Finally, the new sample introduction system was used in conjunction with a simple filter FIRE radiometer as a detection system in ion-exchange high-performance liquid chromatography.
Ion-exchange chromatograms are shown for two aqueous mixtures, one containing six organic

  19. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
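
    The intuition behind applying DVFS during communication phases can be captured in a first-order energy model. The cubic dynamic-power scaling and all numeric values below are textbook assumptions for illustration, not measurements from this work.

```python
def energy_j(p_dyn_w: float, p_static_w: float, runtime_s: float) -> float:
    """Total energy = (dynamic + static) power x time."""
    return (p_dyn_w + p_static_w) * runtime_s

def dvfs_energy_j(p_dyn_w: float, p_static_w: float, runtime_s: float,
                  f_scale: float, stall_fraction: float) -> float:
    """First-order model (assumed): dynamic power scales ~ f^3 (f*V^2 with
    V ~ f); compute time stretches by 1/f_scale, but the fraction of time
    spent stalled in communication is unaffected by the CPU frequency."""
    p_dyn_scaled = p_dyn_w * f_scale ** 3
    t_scaled = runtime_s * (stall_fraction + (1.0 - stall_fraction) / f_scale)
    return energy_j(p_dyn_scaled, p_static_w, t_scaled)

# Hypothetical node: 80 W dynamic, 40 W static, 100 s runtime,
# half of which is spent stalled in MPI communication.
baseline = energy_j(80.0, 40.0, 100.0)
scaled = dvfs_energy_j(80.0, 40.0, 100.0, f_scale=0.8, stall_fraction=0.5)
print(f"baseline {baseline:.0f} J, with DVFS {scaled:.0f} J "
      f"({1 - scaled / baseline:.0%} saved)")
```

    The model makes the thesis's point visible: the larger the communication-stall fraction, the more energy DVFS recovers for a given performance loss, which is why the runtime system targets communication phases.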

  20. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  1. High performance hand-held gas chromatograph

    SciTech Connect

    Yu, C.M.

    1998-04-28

    The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five lbs., with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power with a response time on the order of one to two minutes. This HHGC averages about 40,000 effective theoretical plates. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated completely on silicon wafers by means of MEMS technology and has a circular cross section with a diameter of 100 μm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabVIEW software.

  2. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a 12 MB binary on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  3. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real time and acquires spectra manually or automatically. Applications in fast HPLC…

  4. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  5. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational, awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experiences for Undergraduates Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  6. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial to the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  7. Characterization of covalent addition products of chlorogenic acid quinone with amino acid derivatives in model systems and apple juice by high-performance liquid chromatography/electrospray ionization tandem mass spectrometry.

    PubMed

    Schilling, Susanne; Sigolotto, Constance-Isabelle; Carle, Reinhold; Schieber, Andreas

    2008-01-01

    High-performance liquid chromatography (HPLC) coupled to electrospray ionization tandem mass spectrometry (ESI-MS(n)) was used to study the covalent interactions between chlorogenic acid (CQA) quinone and two amino acid derivatives, tert-butyloxycarbonyl-L-lysine and N-acetyl-L-cysteine. In a model system at pH 7.0, the formation of covalent addition products was demonstrated for both derivatives. The addition product of CQA dimer and tert-butyloxycarbonyl-L-lysine was characterized by LC/MS(n) as a benzacridine structure. For N-acetyl-L-cysteine, mono- and diaddition products at the thiol group with CQA quinone were found. In apple juice at pH 3.6, covalent interactions of CQA quinone were observed only with N-acetyl-L-cysteine. Taking these results together with those reported by other groups, it can be concluded that covalent interactions of amino side chains with phenolic compounds could contribute to the reduction of the allergenic potential of certain food proteins.

  8. Validation and Application of an Ultra High-Performance Liquid Chromatography Tandem Mass Spectrometry Method for Yuanhuacine Determination in Rat Plasma after Pulmonary Administration: Pharmacokinetic Evaluation of a New Drug Delivery System.

    PubMed

    Li, Man; Liu, Xiao; Cai, Hao; Shen, Zhichun; Xu, Liu; Li, Weidong; Wu, Li; Duan, Jinao; Chen, Zhipeng

    2016-12-16

    Yuanhuacine was found to have significant inhibitory activity against A-549 human lung cancer cells. However, systemic administration of yuanhuacine, such as by the oral and intravenous routes, causes serious adverse toxicity effects. In order to achieve a better curative effect and to alleviate these adverse effects, we tried to deliver yuanhuacine directly into the lungs. Ultra high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) was used to detect the analyte and the internal standard (IS). After extraction (ether:dichloromethane = 8:1), the analyte and IS were separated on a Waters BEH-C18 column (100 mm × 2.1 mm, 1.7 μm) under a 5 min gradient elution using a mixture of acetonitrile and 0.1% formic acid aqueous solution as mobile phase at a flow rate of 0.3 mL/min. ESI positive mode was chosen for detection. The method was fully validated for its selectivity, accuracy, precision, stability, matrix effect, and extraction recovery. This new method for yuanhuacine concentration determination in rat plasma is reliable and could be applied for preclinical and clinical monitoring purposes.

  9. Determination of sulfonamides in swine muscle after salting-out assisted liquid extraction with acetonitrile coupled with back-extraction by a water/acetonitrile/dichloromethane ternary component system prior to high-performance liquid chromatography.

    PubMed

    Tsai, Wen-Hsien; Huang, Tzou-Chi; Chen, Ho-Hsien; Wu, Yuh-Wern; Huang, Joh-Jong; Chuang, Hung-Yi

    2010-01-15

    A salting-out assisted liquid extraction coupled with back-extraction by a water/acetonitrile/dichloromethane ternary component system combined with high-performance liquid chromatography with diode-array detection (HPLC-DAD) was developed for the extraction and determination of sulfonamides in solid tissue samples. After the homogenization of the swine muscle with acetonitrile and salt-promoted partitioning, an aliquot of 1 mL of the acetonitrile extract containing a small amount of dichloromethane (250-400 microL) was alkalinized with diethylamine. The clear organic extract obtained by centrifugation was used as a donor phase, and then a small amount of water (40-55 microL) could be used as an acceptor phase to back-extract the analytes in the water/acetonitrile/dichloromethane ternary component system. In the back-extraction procedure, after mixing and centrifuging, the sedimented phase would be water and could be withdrawn easily into a microsyringe and directly injected into the HPLC system. Under the optimal conditions, recoveries were determined for swine muscle fortified at 10 ng/g and quantification was achieved by matrix-matched calibration. The calibration curves of five sulfonamides showed linearity with coefficients of determination above 0.998. Relative recoveries for the analytes were all from 96.5 to 109.2% with relative standard deviations of 2.7-4.0%. Preconcentration factors ranged from 16.8 to 30.6 for 1 mL of the acetonitrile extract. Limits of detection ranged from 0.2 to 1.0 ng/g.

  10. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  11. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters show better performance in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specification set forth by the sensitivity requirements of the coronagraph.

  12. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we also present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  13. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
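    As a concrete illustration of the idea (with hypothetical numbers and names; the system described in the abstract is far more sophisticated), a minimal history-based estimator fits runtime against input size from past executions and pads the prediction with a safety margin before submission, so the scheduler neither kills the job nor wastes an oversized reservation:

```python
# Minimal sketch of history-based resource estimation for job submission.
# Data and names are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def estimate(history, size_mb, margin=1.25):
    """Predict runtime for a new input size, padded by a safety margin
    so an underestimate does not cause an incomplete execution."""
    a, b = fit_line([h[0] for h in history], [h[1] for h in history])
    return (a * size_mb + b) * margin

# Past (input_mb, runtime_s) observations for one pipeline stage:
runs = [(100, 60.0), (200, 115.0), (400, 230.0)]
print(round(estimate(runs, 300), 1))  # → 216.1
```

    The same shape of model can be fitted for memory usage; the hard part, as the abstract notes, is that real algorithms vary across machines and data inputs, which is why a single linear feature is only a starting point.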

  14. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  15. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  16. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  17. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  18. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
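    The grouping step the patent claims can be sketched in a few lines: threads whose calling-instruction addresses match fall into one group, so a few defective threads stand out against thousands behaving identically. The addresses and thread ids below are invented for illustration.

```python
# Sketch of grouping threads by their calling-instruction addresses
# (addresses and thread ids are hypothetical examples).
from collections import defaultdict

def group_threads(call_addrs):
    """Map each distinct address list to the thread ids that share it."""
    groups = defaultdict(list)
    for tid, addrs in call_addrs.items():
        groups[tuple(addrs)].append(tid)
    return dict(groups)

# Thread 2 is stuck at a different call site than its peers:
snapshot = {0: [0x4006f0, 0x400a10],
            1: [0x4006f0, 0x400a10],
            2: [0x4006f0, 0x400b80]}
groups = group_threads(snapshot)
print(sorted(len(g) for g in groups.values()))  # group sizes → [1, 2]
```

    Displaying the small groups first is what makes the defective threads easy to identify at scale.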

  19. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  20. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  1. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…

  2. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified along with those induced by rotor bearing systems. Several methods of rotordynamic control are described including active feedback methods, the use of elastomeric elements, and the use of hydrodynamic journal bearings and supports.

  3. The Process Guidelines for High-Performance Buildings

    SciTech Connect

    Grondzik, W.

    1999-07-01

    The Process Guidelines for High-Performance Buildings are a set of recommendations for the design and operation of efficient and effective commercial/institutional buildings. The Process Guidelines have been developed in a searchable database format and are intended to replace print documents that provide guidance for new building designs for the State of Florida and for the operation of existing State buildings. The Process Guidelines for High-Performance Buildings reside on the World Wide Web and are publicly accessible. Contents may be accessed in a variety of ways to best suit the needs of the user. The Process Guidelines address the interests of a range of facilities professionals; are organized around the primary phases of building design, construction, and operation; and include content dealing with all major building systems. The Process Guidelines for High-Performance Buildings may be accessed through the "Resources" area of the edesign Web site: http://fcn.state.fl.us/fdi/edesign/resource/index.html.

  4. On-line two-dimensional countercurrent chromatography×high performance liquid chromatography system with a novel fragmentary dilution and turbulent mixing interface for preparation of coumarins from Cnidium monnieri.

    PubMed

    Wang, Dong; Chen, Long-Jiang; Liu, Jing-Lan; Wang, Xin-Yuan; Wu, Yun-Long; Fang, Mei-Juan; Wu, Zhen; Qiu, Ying-Kun

    2015-08-07

    This study describes a novel on-line two-dimensional countercurrent chromatography×high performance liquid chromatography (2D CCC×HPLC) system for one-step preparative isolation of coumarins from the fruits of Cnidium monnieri. An optimal biphasic solvent system composed of n-heptane/acetone/water (31:50:19, v/v) with suitable Kd values and a higher retention of the stationary phase was chosen to separate target compounds. In order to address the solvent incompatibility problem between CCC and RP-HPLC, a novel fragmentary dilution and turbulent mixing (FD-TM) interface was successfully developed. In detail, the eluent from the first-dimensional CCC column was divided into fractions to form 'sample-dilution' stripes in the two switching sample loops, by the dilution water from the makeup pump. Following this, a long, thin tube was applied to mix the CCC eluent with water by in-tube turbulence, to reduce the solvent effect. Each CCC fraction was alternately trapped on the two holding columns for further preparative HPLC separation. This rationally designed FD-TM strategy effectively reduced post-column pressure and allowed a higher water dilution ratio at the post end of CCC, leading to improved sample recovery and a robust 2D CCC×HPLC isolation system. As a result, in a single 2D separation run (6.5 h), eight target compounds (1-8) were isolated from 0.5 g crude extract of C. monnieri, in overall yields of 1.3, 2.0, 0.5, 0.5, 0.8, 1.5, 8.2, and 15.0%, with HPLC purity of 90.1, 91.1, 94.7, 99.1, 99.2, 98.2, 97.9, and 91.9%, respectively. We anticipate that this improved 2D CCC×HPLC system, based on the novel FD-TM interface, has broad application for simultaneous isolation and purification of multiple components from other complex plant-derived natural products.

  5. Synchronized separation, concentration and determination of trace sulfadiazine and sulfamethazine in food and environment by using polyoxyethylene lauryl ether-salt aqueous two-phase system coupled to high-performance liquid chromatography.

    PubMed

    Lu, Yang; Cong, Biao; Tan, Zhenjiang; Yan, Yongsheng

    2016-11-01

    Polyoxyethylene lauryl ether (POELE10)-Na2C4H4O6 aqueous two-phase extraction system (ATPES) is a novel and green pretreatment technique for trace samples. ATPES coupled with high-performance liquid chromatography (HPLC) was used to synchronously analyze sulfadiazine (SDZ) and sulfamethazine (SMT) in animal by-products (i.e., egg and milk) and an environmental water sample. It was found that the extraction efficiency (E%) and the enrichment factor (F) of SDZ and SMT were influenced by the type of salt, the concentration of salt, the concentration of POELE10, and the temperature. An orthogonal experimental design (OED) was adopted in the multi-factor experiment to determine the optimized conditions. The final optimal conditions were as follows: the concentration of POELE10 is 0.027 g mL(-1), the concentration of Na2C4H4O6 is 0.180 g mL(-1), and the temperature is 35°C. This POELE10-Na2C4H4O6 ATPS was applied to separate and enrich SDZ and SMT in real samples (i.e., water, egg and milk) under the optimal conditions, and it was found that the recovery of SDZ and SMT was 96.20-99.52% with RSD of 0.35-3.41%. The limit of detection (LOD) of this method for SDZ and SMT in spiked samples was 2.52-3.64 pg mL(-1), and the limit of quantitation (LOQ) was 8.41-12.15 pg mL(-1).
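    For reference, the two figures of merit named above are commonly defined as follows; the abstract itself does not state the formulas, so the symbols here are a standard assumption rather than a quotation:

```latex
E\% = \frac{C_t V_t}{C_0 V_0} \times 100, \qquad F = \frac{C_t}{C_0}
```

    where $C_0$ and $V_0$ are the analyte concentration and volume of the original sample, and $C_t$ and $V_t$ refer to the analyte-rich top phase of the aqueous two-phase system.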

  6. Employment of High-Performance Thin-Layer Chromatography for the Quantification of Oleuropein in Olive Leaves and the Selection of a Suitable Solvent System for Its Isolation with Centrifugal Partition Chromatography.

    PubMed

    Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros

    2015-11-01

    A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed in the purification of oleuropein by fast centrifugal partition chromatography, replacing high-performance liquid chromatography in the stage of determining the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficients, Kc = Cs/Cm. The total concentrations of the target compound in the stationary phase (Cs) and in the mobile phase (Cm) were calculated from the areas measured in the high-performance thin-layer chromatogram. The estimated Kc was also used for the calculation of the retention volume, VR, with a chromatographic retention equation. The obtained data were successfully applied to the purification of oleuropein, and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography could be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used for the estimation of its content in each extract with a simple, sensitive and accurate high-performance thin-layer chromatography method. The best-fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial and the quantification was achieved by UV detection at λ 240 nm. The method was validated, giving rise to an efficient and high-throughput procedure, with the relative standard deviation of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection. The amount of oleuropein for
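    The retention equation referred to above is, in its standard countercurrent/centrifugal partition chromatography form (the abstract does not spell it out, so this is the commonly used relation rather than a quotation):

```latex
K_c = \frac{C_S}{C_M}, \qquad V_R = V_M + K_c\, V_S
```

    where $V_M$ and $V_S$ are the volumes of the mobile and stationary phases held in the column; a larger distribution coefficient $K_c$ therefore translates directly into a later elution volume.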

  7. AHPCRC - Army High Performance Computing Research Center

    DTIC Science & Technology

    2010-01-01

    treatments and reconstructive surgeries. High performance computer simulation allows designers to try out numerous mechanical and material...investigating the effect of techniques for simplifying the calculations (sending the projectile through a pre-existing hole, for example) on the accuracy of...semiconductor particles are size-dependent. These properties, including yield strength and resistance to fatigue, are not well predicted by macroscopic

  8. High-performance reactionless scan mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Ellen I.; Summers, Richard T.; Ostaszewski, Miroslaw A.

    1995-01-01

    A high-performance reactionless scan mirror mechanism was developed for space applications to provide thermal images of the Earth. The design incorporates a unique mechanical means of providing reactionless operation that also minimizes weight, mechanical resonance operation to minimize power, combined use of a single optical encoder to sense coarse and fine angular position, and a new kinematic mount of the mirror. A flex pivot hardware failure and current project status are discussed.

  9. High Performance Multiwall Carbon Nanotube Bolometers

    DTIC Science & Technology

    2010-10-21

    High infrared bolometric photoresponse has been observed in multiwall carbon nanotube (MWCNT) films at room temperature. The observed detectivity D exceeds 3.3 × 10^6 cm Hz^(1/2)/W on MWCNT film...Subject terms: carbon nanotube, infrared detector, bolometer

  10. High Performance Split-Stirling Cooler Program

    DTIC Science & Technology

    1982-09-01

    High Performance Split-Stirling Cooler Program, Final Technical Report, September 1982 (covering Sept. 1979 - Sept. 1982). Prepared for the Night Vision and Electro-Optics Laboratories, contract DAAK70... Listed figures include the split-Stirling cycle cryocooler and temperature-shock comparison performance data (S/N 002).

  11. Task parallelism and high-performance languages

    SciTech Connect

    Foster, I.

    1996-03-01

    The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. The subject of this paper is how to incorporate support for task parallelism into such a language. The term task parallelism refers to the explicit creation of multiple threads of control, or tasks, which synchronize and communicate under programmer control. Task and data parallelism are complementary rather than competing programming models. While task parallelism is more general and can be used to implement algorithms that are not amenable to data-parallel solutions, many problems can benefit from a mixed approach, with for example a task-parallel coordination layer integrating multiple data-parallel computations. Other problems admit both data- and task-parallel solutions, with the better solution depending on machine characteristics, compiler performance, or personal taste. For these reasons, we believe that a general-purpose high-performance language should integrate both task- and data-parallel constructs. The challenge is to do so in a way that provides the expressivity needed for applications, while preserving the flexibility and portability of a high-level language. In this paper, we examine and illustrate the considerations that motivate the use of task parallelism. We also describe one particular approach to task parallelism in Fortran, namely the Fortran M extensions. Finally, we contrast Fortran M with other proposed approaches and discuss the implications of this work for task parallelism and high-performance languages.
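    The mixed model described above, a task-parallel coordination layer driving data-parallel computations, can be illustrated generically. This is a minimal Python sketch (not Fortran M; the worker function and data are invented for illustration):

```python
# Generic sketch of mixed task/data parallelism: independent tasks, each
# applying a data-parallel kernel to its own chunk, gathered at the end.
from concurrent.futures import ThreadPoolExecutor

def data_parallel_sum(chunk):
    # Stand-in for a data-parallel kernel applied uniformly to a chunk.
    return sum(x * x for x in chunk)

data = list(range(8))
chunks = [data[:4], data[4:]]

# Task-parallel coordination layer: two tasks run concurrently and are
# synchronized when their results are collected.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(data_parallel_sum, chunks))

print(results, sum(results))  # [14, 126] 140
```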

  12. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  13. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High-temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity and low-activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of a high-performance plasma. The key characteristics of eight advanced high-performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advancements in plasma control and scrape-off-layer physics, additional emphasis will be needed in the areas of first-wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  14. High-performance network and channel based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1992-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called I/O channels. With the dramatic shift toward workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. In this paper, we discuss the underlying technology trends that are leading to high-performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high-performance computing based on network-attached storage.

  15. Micro-polarimeter for high performance liquid chromatography

    DOEpatents

    Yeung, Edward E.; Steenhoek, Larry E.; Woodruff, Steven D.; Kuo, Jeng-Chung

    1985-01-01

    A micro-polarimeter interfaced with a system for high performance liquid chromatography, for quantitatively analyzing micro and trace amounts of optically active organic molecules, particularly carbohydrates. A flow cell with a narrow bore is connected to a high performance liquid chromatography system. Thin, low-birefringence cell windows cover opposite ends of the bore. A focused and polarized laser beam is directed along the longitudinal axis of the bore as an eluent containing the organic molecules is pumped through the cell. The beam is modulated by air-gap Faraday rotators for phase-sensitive detection to enhance the signal-to-noise ratio. An analyzer records the beam's direction of polarization after it passes through the cell. Calibration of the liquid chromatography system allows determination of the quantity of organic molecules present from a determination of the degree to which the polarized beam is rotated when it passes through the eluent.
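    The calibration idea, converting a measured rotation into a quantity of analyte, can be sketched with Biot's law, α = [α]·l·c. This is a minimal illustration; the specific rotation, path length, and rotation reading below are assumed example values, not figures from the patent.

```python
# Sketch of Biot's law: observed rotation alpha = [alpha] * l * c,
# so a measured rotation can be inverted to a concentration.
# All numeric values are illustrative placeholders.

def concentration_from_rotation(alpha_obs_deg: float,
                                specific_rotation: float,
                                path_dm: float) -> float:
    """Return concentration in g/mL from an observed optical rotation."""
    return alpha_obs_deg / (specific_rotation * path_dm)

c = concentration_from_rotation(alpha_obs_deg=0.665,
                                specific_rotation=66.5,  # e.g. sucrose, deg·mL/(g·dm)
                                path_dm=1.0)
print(round(c, 6))  # 0.01 (g/mL)
```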

  16. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  17. Greenlight high-performance system (HPS) 120-W laser vaporization versus transurethral resection of the prostate for the treatment of benign prostatic hyperplasia: a meta-analysis of the published results of randomized controlled trials.

    PubMed

    Zhou, Yan; Xue, Boxin; Mohammad, Nadeem Ahmed; Chen, Dong; Sun, Xiaofei; Yang, Jinhui; Dai, Guangcheng

    2016-04-01

    To assess the efficacy and safety of Greenlight(TM) high-performance system (HPS) 120-W laser photoselective vaporization of the prostate (PVP) compared with transurethral resection of the prostate (TURP) for the treatment of benign prostatic hyperplasia (BPH). The relevant original studies, restricted to randomized controlled trials, were retrieved from the MEDLINE, EMBASE, Google Scholar, and Cochrane Controlled Trial Register databases, updated through July 2014. The risk ratio, mean difference, and their corresponding 95% confidence intervals were calculated. Risk of bias of the enrolled trials was assessed according to the Cochrane Handbook. A total of four trials involving 559 patients were enrolled. Statistical analysis was performed with Review Manager (V5.3.3). There was no significant difference in International Prostate Symptom Score (IPSS) or maximum flow rate (Qmax) between PVP and TURP at 6-, 12-, and 24-month follow-up. Patients in the PVP group had a significantly lower risk of capsule perforation (risk ratio (RR) = 0.06, 95% confidence interval (95%CI) = 0.01 to 0.46; p = 0.007), significantly lower transfusion requirements (RR = 0.12, 95%CI = 0.03 to 0.43; p = 0.001), a shorter catheterization time (mean difference (MD) = -41.93, 95%CI = -54.87 to -28.99; p < 0.00001), and a shorter duration of hospital stay (MD = -2.09, 95%CI = -2.58 to -1.59; p < 0.00001) than those in the TURP group. Patients in the TURP group had a lower risk of re-operation (RR = 3.68, 95%CI = 1.04 to 13.00; p = 0.04) and a shorter operative time (MD = 9.28, 95%CI = 2.80 to 15.75; p = 0.005) than those in the PVP group. In addition, no statistically significant differences were detected between groups in the rates of transurethral resection syndrome, urethral stricture, bladder neck contracture, incontinence, and infection. Greenlight(TM) 120-W
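    The risk-ratio statistics quoted above follow a standard computation: the RR of two event proportions, with a confidence interval formed on the log scale. A minimal Python sketch; the 2×2 counts are invented for illustration, not data from the review.

```python
# Sketch of a risk ratio with a log-scale 95% CI, the statistic reported
# in the meta-analysis above. Counts below are illustrative placeholders.
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs. group B with a 95% CI on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # Standard error of log(RR) for a 2x2 table.
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(events_a=2, total_a=100, events_b=16, total_b=100)
print(f"RR = {rr:.3f}, 95% CI {lo:.3f} to {hi:.3f}")
```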

  18. Efficacy of a vaporization-resection of the prostate median lobe enlargement and vaporization of the prostate lateral lobe for benign prostatic hyperplasia using a 120-W GreenLight high-performance system laser: the effect on storage symptoms.

    PubMed

    Kim, Kang Sup; Choi, Sae Woong; Bae, Woong Jin; Kim, Su Jin; Cho, Hyuk Jin; Hong, Sung-Hoo; Lee, Ji Youl; Hwang, Tae-Kon; Kim, Sae Woong

    2015-05-01

    GreenLight laser photoselective vaporization of the prostate (PVP) was established as a minimally invasive procedure to treat patients with benign prostatic hyperplasia (BPH). However, it may be difficult to achieve adequate tissue removal from a large prostate, particularly one with an enlarged median lobe. The purpose of this study was to investigate the feasibility and clinical effect of 120-W GreenLight high-performance system laser vaporization-resection of an enlarged prostate median lobe compared with vaporization only. A total of 126 patients from January 2010 to January 2014 had an enlarged prostate median lobe and were included in this study. Ninety-six patients underwent vaporization only (VP group), and 30 patients underwent vaporization-resection of the enlarged median lobe (VR group). The clinical outcomes were International Prostate Symptom Score (IPSS), quality of life (QOL), maximum flow rate (Qmax), and post-void residual urine volume (PVR), assessed at 1, 3, 6, and 12 months postoperatively in both groups. The parameters were not significantly different preoperatively between the two groups, except for PVR. Operative time and laser time were shorter in the VR group than in the VP group (74.1 vs. 61.9 min and 46.7 vs. 37.8 min; P = 0.020 and 0.013, respectively), and the VR group used less energy (218.2 vs. 171.8 kJ, P = 0.025). Improved IPSS values, increased Qmax, and a reduced PVR were seen in both groups. In particular, improvements in storage IPSS values were greater at 1 and 3 months in the VR group than in the VP group (P = 0.030 and 0.022, respectively). No significant complications were detected in either group. Median lobe tissue vaporization-resection was complete, and good voiding results were achieved. Although changes in urinary symptoms were similar between patients who received the two techniques, the vaporization-resection technique offered a shorter operating time and lower energy use. In

  19. Cray XMT Brings New Energy to High-Performance Computing

    SciTech Connect

    Chavarría-Miranda, Daniel; Gracio, Deborah K.; Marquez, Andres; Nieplocha, Jaroslaw; Scherrer, Chad; Sofia, Heidi J.

    2008-09-30

    The ability to solve our nation’s most challenging problems—whether it’s cleaning up the environment, finding alternative forms of energy or improving public health and safety—requires new scientific discoveries. High performance experimental and computational technologies from the past decade are helping to accelerate these scientific discoveries, but they introduce challenges of their own. The vastly increasing volumes and complexities of experimental and computational data pose significant challenges to traditional high-performance computing (HPC) platforms as terabytes to petabytes of data must be processed and analyzed. And the growing complexity of computer models that incorporate dynamic multiscale and multiphysics phenomena place enormous demands on high-performance computer architectures. Just as these new challenges are arising, the computer architecture world is experiencing a renaissance of innovation. The continuing march of Moore’s law has provided the opportunity to put more functionality on a chip, enabling the achievement of performance in new ways. Power limitations, however, will severely limit future growth in clock rates. The challenge will be to obtain greater utilization via some form of on-chip parallelism, but the complexities of emerging applications will require significant innovation in high-performance architectures. The Cray XMT, the successor to the Tera/Cray MTA, provides an alternative platform for addressing computations that stymie current HPC systems, holding the potential to substantially accelerate data analysis and predictive analytics for many complex challenges in energy, national security and fundamental science that traditional computing cannot do.

  20. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to change the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers.
TEM analysis of the fiber failure mechanisms was also attempted, though without

  1. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  2. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    The high performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosive effect on aluminum or titanium was observed.

  3. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  4. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  5. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  6. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  7. High performance thyratron driver with low jitter.

    PubMed

    Verma, Rishi; Lee, P; Springham, S V; Tan, T L; Rawat, R S

    2007-08-01

    We report the design and development of an insulated-gate bipolar junction transistor based high-performance driver for operating thyratrons in grounded-grid mode. With careful design, the driver meets the specification of trigger output pulse rise time less than 30 ns, jitter less than +/-1 ns, and time delay less than 160 ns. It produces a -600 V pulse of 500 ns duration (full width at half maximum) at repetition rates ranging from 1 Hz to 1.14 kHz. The developed module also incorporates heating and biasing units along with protection circuitry in one complete package.

  8. High Performance Polymer Memory and Its Formation

    DTIC Science & Technology

    2007-04-26

    Final Report to AFOSR: High Performance Polymer Memory Device and Its Formation (Fund No. FA9550-04-1-0215), prepared by Prof. Yang Yang...polystyrene (PS). The metal nanoparticles were prepared by the two-phase...[figure residue: current-voltage curves of a polymer-film device on glass, bias from -2 to 5 V]...such as copper phthalocyanine (CuPc), zinc phthalocyanine (ZnPc), tetracene, and pentacene have been used as donors combined with

  9. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer-aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to designing scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures, as the cost of data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows, and the integration of high performance computing infrastructures. We illustrate the use of such architectures, built mainly from off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  10. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  11. Micromachined high-performance RF passives in CMOS substrate

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Ni, Zao; Gu, Lei; Wu, Zhengzheng; Yang, Chen

    2016-11-01

    This review systematically addresses the micromachining technologies used for the fabrication of high-performance radio-frequency (RF) passives that can be integrated into low-cost complementary metal-oxide semiconductor (CMOS)-grade (i.e. low-resistivity) silicon wafers. With the development of various kinds of post-CMOS-compatible microelectromechanical systems (MEMS) processes, 3D structural inductors/transformers, variable capacitors, tunable resonators and band-pass/low-pass filters can be compatibly integrated into active integrated circuits to form monolithic RF system-on-chips. By using MEMS processes, including substrate modifying/suspending and LIGA-like metal electroplating, both the highly lossy substrate effect and the resistive loss can be largely eliminated and depressed, thereby meeting the high-performance requirements of telecommunication applications.

  12. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  13. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  14. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. It should also offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  15. Stability and control of maneuvering high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Berry, P. W.

    1977-01-01

    The stability and control of a high-performance aircraft was analyzed, and a design methodology for a departure prevention stability augmentation system (DPSAS) was developed. A general linear aircraft model was derived which includes maneuvering flight effects and trim calculation procedures for investigating highly dynamic trajectories. The stability and control analysis systematically explored the effects of flight condition and angular motion, as well as the stability of typical air combat trajectories. The effects of configuration variation also were examined.
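    The abstract's linear-model analysis can be sketched with the standard state-space stability test: linearize about a trim point, then inspect the eigenvalues of the system matrix. The matrix below is a hypothetical longitudinal model with illustrative placeholder entries, not data from any real aircraft or from this report.

    ```python
    import numpy as np

    # Hypothetical linearized longitudinal dynamics x_dot = A x,
    # states: axial velocity u, vertical velocity w, pitch rate q,
    # pitch angle theta. Entries are illustrative assumptions only.
    A = np.array([
        [-0.02,  0.05,  0.0, -9.81],  # u equation
        [-0.10, -0.80, 80.0,  0.00],  # w equation
        [ 0.00, -0.02, -1.5,  0.00],  # q equation
        [ 0.00,  0.00,  1.0,  0.00],  # theta kinematics
    ])

    eigvals = np.linalg.eigvals(A)
    # The trim point is locally stable iff every eigenvalue has a
    # negative real part; a positive real part flags a divergent
    # (departure-prone) mode that augmentation must suppress.
    stable = all(ev.real < 0 for ev in eigvals)
    print(eigvals)
    print(stable)
    ```

    A departure-prevention study repeats this check across flight conditions and angular rates, since the entries of A change with the trim state.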

  16. Highlighting High Performance Buildings: National Renewable Energy Laboratory's Visitors Center

    SciTech Connect

    2001-06-01

    The National Renewable Energy Laboratory Visitors Center, also known as the Dan Schaefer Federal Building, is a high-performance building located in Golden, Colorado. The 6,400-square-foot building incorporates passive solar heating, energy-efficient lighting, an evaporative cooling system, and other technologies to minimize energy costs and environmental impact. The Visitors Center displays a variety of interactive exhibits on energy efficiency and renewable energy, and the building includes an auditorium, a public reading room, and office space.

  17. Optics of high-performance electron microscopes.

    PubMed

    Rose, H H

    2008-01-01

    During recent years, the theory of charged particle optics, together with advances in fabrication tolerances and experimental techniques, has led to very significant advances in high-performance electron microscopes. Here, we will describe which theoretical tools, inventions and designs have driven this development. We cover the basic theory of higher-order electron optics and of image formation in electron microscopes. This leads to a description of different methods to correct aberrations by multipole fields and to a discussion of the most advanced designs that take advantage of these techniques. The theory of electron mirrors is developed and it is shown how this can be used to correct aberrations and to design energy filters. Finally, different types of energy filters are described.
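    The trade-off that aberration correctors attack can be summarized by the textbook resolution relations (standard electron-optics results, not taken from this abstract). With spherical aberration coefficient $C_s$, wavelength $\lambda$, and aperture semi-angle $\alpha$, the aberration disc and diffraction disc scale as

    ```latex
    d_s \approx C_s \,\alpha^{3}, \qquad
    d_d \approx \frac{0.61\,\lambda}{\alpha},
    \qquad\Rightarrow\qquad
    \alpha_{\mathrm{opt}} \propto \left(\frac{\lambda}{C_s}\right)^{1/4},
    \quad
    d_{\min} \propto C_s^{1/4}\,\lambda^{3/4}.
    ```

    Because round lenses cannot make $C_s$ negative (Scherzer's theorem), multipole correctors of the kind discussed in the paper are needed to cancel the $C_s\,\alpha^3$ term and push $d_{\min}$ below this limit.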

  18. A high performance architecture for prolog

    SciTech Connect

    Dobry, T.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This work presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. The ISA is then realized in a microarchitecture and finally in a hardware design.

  19. High-performance architecture for Prolog

    SciTech Connect

    Dobry, T.P.

    1987-01-01

    Artificial intelligence is entering the mainstream of computer applications and, as techniques are developed and integrated into a wide variety of areas, they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a tenfold improvement in performance over conventional, general-purpose architectures. This dissertation presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design.
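    The core operation that WAM get/unify instructions compile into specialized code is first-order unification. A toy sketch is below; the tuple term representation and uppercase-variable convention are simplifications for illustration, not the WAM's tagged-cell heap layout.

    ```python
    # Toy first-order unification. Terms are tuples ('functor', arg1, ...);
    # variables are strings starting with an uppercase letter. This is a
    # didactic simplification, not the WAM's actual data representation.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        # Dereference a bound variable (analogous to the WAM 'deref' step).
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def unify(a, b, subst):
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}
        if is_var(b):
            return {**subst, b: a}
        if (isinstance(a, tuple) and isinstance(b, tuple)
                and len(a) == len(b) and a[0] == b[0]):
            for x, y in zip(a[1:], b[1:]):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None  # functor/arity clash: unification fails

    # father(X, tom) unifies with father(bob, Y):
    s = unify(('father', 'X', ('tom',)), ('father', ('bob',), 'Y'), {})
    print(s)  # {'X': ('bob',), 'Y': ('tom',)}
    ```

    A WAM-style compiler removes this interpretive overhead by emitting one specialized instruction per argument position of the clause head, which is where the performance gain over a naive interpreter comes from.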

  20. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.
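    The link between permeance and torque mentioned in the abstract follows the standard reluctance-torque relation T = ½ i² dL/dθ, where the phase inductance L tracks the tooth permeance. The sinusoidal permeance model and all numerical values below are illustrative assumptions, not design data from the paper.

    ```python
    import math

    # Illustrative reluctance-torque sketch for a toothed structure.
    # Assumed values (not from the source):
    N_R = 50              # rotor tooth count
    L0, L1 = 5e-3, 1e-3   # mean and varying phase inductance, henries
    i = 2.0               # phase current, amperes

    def inductance(theta):
        # Simple single-harmonic permeance model: L(theta) = L0 + L1*cos(N_R*theta)
        return L0 + L1 * math.cos(N_R * theta)

    def torque(theta):
        # T = 1/2 * i^2 * dL/dtheta, with dL/dtheta = -N_R * L1 * sin(N_R*theta)
        dL_dtheta = -N_R * L1 * math.sin(N_R * theta)
        return 0.5 * i * i * dL_dtheta

    # Peak torque is 0.5 * i^2 * N_R * L1 = 0.1 N*m for these numbers;
    # sample one electrical period to locate it numerically.
    peak = max(torque(k * 1e-4) for k in range(2000))
    print(peak)
    ```

    In a real design the finite-element permeance curve replaces the single cosine harmonic; its higher harmonics are what produce the torque ripple that the sub-3-percent designs minimize.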