Science.gov

Sample records for high-performance microdialysis-based system

  1. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  2. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate, and commercialize technology for the storage systems that will be the future repositories for our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  3. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequencing batch reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, and diagnosing performance failures.

  4. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  5. System analysis of high performance MHD systems

    SciTech Connect

    Chang, S.L.; Berry, G.F.; Hu, N.

    1988-01-01

    This paper presents the results of an investigation into the upper ranges of performance that an MHD power plant using advanced technology assumptions might achieve, together with a parametric study of the key variables affecting this high performance. To simulate a high performance MHD power plant and conduct the parametric study, the Systems Analysis Language Translator (SALT) code developed at Argonne National Laboratory was used. The parametric study results indicate that the overall efficiency of an MHD power plant can be further increased through improvement of key variables such as the MHD generator inverter efficiency, channel electrical loading factor, magnetic field strength, preheated air temperature, and combustor heat loss. In an optimization calculation, the simulated high performance MHD power plant using advanced technology assumptions can attain an ultra-high overall efficiency exceeding 62%. 12 refs., 5 figs., 4 tabs.

  6. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K. (Bethune-Cookman Coll.; SLAC)

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems makes it possible to foresee possible malfunctions or system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by the two databases are made using gnuplot and Ganglia's real-time graphical user interface.
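
    To make the approach concrete, below is a minimal sketch of what such a script-driven collector might look like, assuming the PyMySQL driver, gmond's default XML port (8649), and an illustrative `metrics` table; the paper's actual scripts and schema are not reproduced here.

```python
# Hypothetical sketch: poll gmond's XML snapshot and store samples in MySQL
# instead of RRD. Host, credentials, and the "metrics" table are assumptions.
import socket
import xml.etree.ElementTree as ET

import pymysql  # assumed driver: pip install pymysql


def fetch_gmond_xml(host="localhost", port=8649):
    """Read the XML cluster snapshot that gmond serves on its TCP port."""
    with socket.create_connection((host, port)) as sock:
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()


def store_metrics(xml_text):
    """Insert one row per (host, metric) sample into the metrics table."""
    conn = pymysql.connect(host="localhost", user="ganglia",
                           password="secret", database="monitoring")
    try:
        with conn.cursor() as cur:
            for host in ET.fromstring(xml_text).iter("HOST"):
                for metric in host.iter("METRIC"):
                    cur.execute(
                        "INSERT INTO metrics (host, name, value, units) "
                        "VALUES (%s, %s, %s, %s)",
                        (host.get("NAME"), metric.get("NAME"),
                         metric.get("VAL"), metric.get("UNITS")))
        conn.commit()
    finally:
        conn.close()


store_metrics(fetch_gmond_xml())
```

    Run periodically (e.g., from cron), this yields append-only rows that keep full history, which is the data-integrity advantage over RRD's lossy downsampling.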

  7. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  8. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial

  9. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  10. High-Performance Energy Applications and Systems

    SciTech Connect

    Miller, Barton

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  11. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  12. Toward a new metric for ranking high performance computing systems.

    SciTech Connect

    Heroux, Michael Allen; Dongarra, Jack.

    2013-06-01

    The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance and expect to drive computer system design and implementation in directions that will have a greater impact on real application performance.
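
    For readers unfamiliar with the benchmark's core algorithm, the following minimal NumPy sketch shows plain (unpreconditioned) conjugate gradient on a small symmetric positive definite system. The real HPCG adds a 27-point sparse stencil, a symmetric Gauss-Seidel preconditioner, and MPI halo exchange; this sketch only illustrates the kernel mix (sparse matrix-vector products, dot products, and vector updates) that makes HPCG memory-bandwidth-bound rather than compute-bound like HPL.

```python
# Minimal conjugate gradient sketch (NumPy), illustrating the kernels HPCG
# stresses: matrix-vector products, dot-product reductions, and AXPY updates.
import numpy as np


def conjugate_gradient(A, b, tol=1e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                  # SpMV: the bandwidth-bound kernel
        alpha = rs / (p @ Ap)       # dot products: reduction-bound
        x += alpha * p              # AXPY updates
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


# Small SPD test problem: the 1D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # should be ~1e-8 or below
```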

  13. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  14. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in the computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems, and data centers. Sustaining this growth in parallelism introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate the feasibility of waveguides, modulators, switches, and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies, and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  15. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  16. Class of service in the high performance storage system

    SciTech Connect

    Louis, S.; Teaff, D.

    1995-01-10

    Quality of service capabilities are commonly deployed in archival mass storage systems as one or more client-specified parameters that influence the physical location of data in multi-level device hierarchies for performance or cost reasons. The capabilities of new high-performance storage architectures and the needs of data-intensive applications require better quality of service models for modern storage systems. HPSS, a new distributed, high-performance, scalable storage system, uses a Class of Service (COS) structure to influence system behavior. The authors summarize the design objectives and functionality of HPSS and describe how COS defines a set of performance, media, and residency attributes assigned to storage objects managed by HPSS servers. COS definitions are used to provide appropriate behavior and service levels as requested (or demanded) by storage system clients. The authors compare the HPSS COS approach with other quality of service concepts and discuss alignment possibilities.
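
    As an illustration of the idea (not HPSS's actual schema), a COS entry can be pictured as a small record of service attributes plus a selection rule; all field names below are assumptions for the sketch.

```python
# Illustrative sketch of a Class-of-Service record and a selection rule.
# Field names are invented for illustration, not HPSS's real COS attributes.
from dataclasses import dataclass


@dataclass
class ClassOfService:
    cos_id: int
    max_file_size: int        # bytes the class is rated for
    stage_on_open: bool       # stage to disk cache when the file is opened?
    copies: int               # number of tape copies kept for redundancy
    transfer_rate_hint: int   # expected MB/s, used for device selection


def select_cos(candidates, file_size, want_fast_access):
    """Pick a class that satisfies the client's request.

    Prefers fewer redundant copies (cheaper media use), then faster devices.
    Raises ValueError if nothing qualifies; a real server would fall back.
    """
    eligible = [c for c in candidates
                if file_size <= c.max_file_size
                and (not want_fast_access or c.stage_on_open)]
    return min(eligible, key=lambda c: (c.copies, -c.transfer_rate_hint))
```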

  17. Materials integration issues for high performance fusion power systems.

    SciTech Connect

    Smith, D. L.

    1998-01-14

    One of the primary requirements for the development of fusion as an energy source is the qualification of materials for the first wall/blanket system that will provide high performance and exhibit favorable safety and environmental features. Both the economic competitiveness and the environmental attractiveness of fusion will be strongly influenced by these materials constraints. A key aspect is the development of a compatible combination of materials for the various functions of structure, tritium breeding, coolant, neutron multiplication, and other special requirements for a specific system. This paper presents an overview of key materials integration issues for high performance fusion power systems. Issues such as chemical compatibility of structure and coolant, hydrogen/tritium interactions with the plasma-facing/structure/breeder materials, thermomechanical constraints associated with the coolant/structure, and thermal-hydraulic requirements are presented, along with safety/environmental considerations from a systems viewpoint. The major materials interactions for leading blanket concepts are discussed.

  18. S-100-bus microcomputers aim at high-performance systems

    SciTech Connect

    Warren, C.

    1982-08-18

    The explosion in integrated low-cost desktop computer systems for single-user dedicated applications has a companion trend: a growing interest in high-performance multitasking, multiuser systems and in system architectures which lend themselves easily to peripheral- and processing-power expansion. The mature S-100 (IEEE-696) bus, once the domain of hobby computers, offers a variety of advantages for serving such applications; thus, systems designed around this bus are enjoying a revival. Microcomputers from a number of manufacturers that follow this trend are described.

  19. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  20. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O-related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive
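
    The tiering policy this abstract describes (hot files on magnetic disk, colder data migrating to optical and tape tiers) can be pictured with a toy sketch; tier names and aging thresholds below are illustrative assumptions, not Loral's design.

```python
# Toy sketch of recency-based hierarchical placement: recently used files
# stay on fast magnetic disk, older ones fall to optical, the oldest to tape.
import time

TIERS = ["magnetic_disk", "optical_jukebox", "tape_8mm"]


class HierarchicalStore:
    def __init__(self, hot_seconds=3600, warm_seconds=86400):
        self.last_access = {}              # file name -> last access time
        self.hot, self.warm = hot_seconds, warm_seconds

    def touch(self, name):
        """Record an access; a real system would also stage the file up."""
        self.last_access[name] = time.time()

    def tier_of(self, name):
        age = time.time() - self.last_access.get(name, 0)
        if age < self.hot:
            return TIERS[0]    # fast retrieval for commonly used files
        if age < self.warm:
            return TIERS[1]
        return TIERS[2]        # cheapest and slowest: archival tape
```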

  1. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems with very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  2. High performance distributed feedback fiber laser sensor array system

    NASA Astrophysics Data System (ADS)

    He, Jun; Li, Fang; Xu, Tuanwei; Wang, Yan; Liu, Yuliang

    2009-11-01

    Distributed feedback (DFB) fiber lasers have unique properties useful for sensing applications. This paper presents a high performance DFB fiber laser sensor array system. Four key techniques have been adopted to build the system: DFB fiber laser design and fabrication, interferometric wavelength shift demodulation, the digital phase generated carrier (PGC) technique, and dense wavelength division multiplexing (DWDM). Experimental results confirm that a high dynamic strain resolution of 305 fɛ/√Hz (@ 1 kHz) has been achieved by the proposed sensor array system, and multiplexing of an eight-channel DFB fiber laser sensor array has been demonstrated. The proposed DFB fiber laser sensor array system is suitable for ultra-weak signal detection and has potential applications in petroleum seismic exploration, earthquake prediction, and security.
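
    A simplified sketch of one common digital PGC scheme (the arctangent variant) follows; the authors' exact implementation may differ, and the sample rate, carrier frequency, and filter are assumptions. The idea: mix the interference signal with the carrier and its second harmonic, low-pass filter, and take an arctangent; at modulation depth C = 2.63 rad the Bessel factors J1(C) and J2(C) are nearly equal and cancel.

```python
# Simplified PGC-arctangent demodulation sketch on a simulated interferometer
# signal s(t) = A + B*cos(C*cos(w0*t) + phi(t)). All parameters are assumed.
import numpy as np

fs, f0 = 100_000, 12_500                 # sample rate, carrier freq (assumed)
t = np.arange(0, 0.1, 1 / fs)
phi = 1e-3 * np.sin(2 * np.pi * 1_000 * t)    # simulated 1 kHz strain signal
C = 2.63
s = 1.0 + 0.5 * np.cos(C * np.cos(2 * np.pi * f0 * t) + phi)


def lowpass(x, cutoff_bins):
    """Crude FFT brick-wall low-pass, adequate for a sketch."""
    X = np.fft.rfft(x)
    X[cutoff_bins:] = 0
    return np.fft.irfft(X, len(x))


cut = int(len(t) * 5_000 / fs)           # pass band well below the carrier
i1 = lowpass(s * np.cos(2 * np.pi * f0 * t), cut)       # ~ -B*J1(C)*sin(phi)
i2 = lowpass(s * np.cos(2 * np.pi * 2 * f0 * t), cut)   # ~ -B*J2(C)*cos(phi)
phi_rec = np.arctan2(-i1, -i2)           # recovered phase (2-pi wraps aside)
```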

  3. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  4. Development of a High Performance Acousto-ultrasonic Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  5. Sustaining high performance: dynamic balancing in an otherwise unbalanced system.

    PubMed

    Wolf, Jason A

    2011-01-01

    As Ovid said, "There is nothing in the whole world which is permanent." It is this very premise that frames the discoveries in this chapter and the compelling paradox it has raised. What began as a question of how performance is sustained unveiled a collection of core organizational paradoxes. The findings ultimately suggest that sustained high performance is not a permanent state an organization achieves, but rather it is through perpetual movement and dynamic balance that sustainability occurs. The idea of sustainability as movement is predicated on the ability of organizational members to move beyond the experience of paradox as an impediment to progress. Through holding three critical "movements"--agile/consistency, collective/individualism, and informative/inquiry--not as paradoxical, but as active polarities, the organizations in the study were able to transcend paradox and take active steps to continuous achievement in outperforming their peers. The study, focused on a collection of hospitals across the United States, reveals powerful stories of care and service, of the profound grace of human capacity, and of clear actions taken to create significant results. All of this was achieved in an environment of great volatility, in essence an unbalanced system. It was the discovery of movement, and ultimately of dynamic balancing, that allowed the organizations in this study to move beyond stasis to the continuous "state" of sustaining high performance.

  6. Coal-fired high performance power generating system. Final report

    SciTech Connect

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV), compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative-type gas turbine, efficiencies of over 49% could be realized in an advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.

  7. System-Level Virtualization for High Performance Computing

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2008-01-01

    System-level virtualization has been a research topic since the 1970s but regained popularity during the past few years because of the availability of efficient solutions such as Xen and the implementation of hardware support in commodity processors (e.g., Intel-VT, AMD-V). However, the majority of system-level virtualization projects are guided by the server consolidation market. As a result, current virtualization solutions appear not to be suitable for high performance computing (HPC), which is typically based on large-scale systems. On the other hand, there is significant interest in exploiting virtual machines (VMs) within HPC for a number of other reasons. By virtualizing the machine, one is able to run a variety of operating systems and environments as needed by the applications. Virtualization allows users to isolate workloads, improving security and reliability. It is also possible to support non-native environments and/or legacy operating environments through virtualization. In addition, it is possible to balance workloads, use migration techniques to relocate applications from failing machines, and isolate faulty systems for repair. This document presents the challenges for the implementation of a system-level virtualization solution for HPC. It also presents a brief survey of the different approaches and techniques to address these challenges.

  8. Using distributed OLTP technology in a high performance storage system

    SciTech Connect

    Tyler, T.W.; Fisher, D.S.

    1995-03-01

    The design of scalable mass storage systems requires various system components to be distributed across multiple processors. Most of these processes maintain persistent database-type information (i.e., metadata) on the resources they are responsible for managing (e.g., bitfiles, bitfile segments, physical volumes, virtual volumes, cartridges, etc.). These processes all participate in fulfilling end-user requests and updating metadata information. A number of challenges arise when distributed processes attempt to maintain separate metadata resources with production-level integrity and consistency. For example, when requests fail, metadata changes made by the various processes must be aborted or rolled back. When requests are successful, all metadata changes must be committed together. If all metadata changes cannot be committed together for some reason, then all metadata changes must be rolled back to the previous consistent state. Lack of metadata consistency jeopardizes storage system integrity. Distributed on-line transaction processing (OLTP) technology can be applied to distributed mass storage systems as the mechanism for managing the consistency of distributed metadata. OLTP concepts are familiar to many industries, such as banking and financial services, but are less well known and understood in scientific and technical computing. As mass storage systems and other products are designed using distributed processing and data-management strategies for performance, scalability, and/or availability reasons, distributed OLTP technology can be applied to solve the inherent challenges raised by such environments. This paper discusses the benefits of using distributed transaction processing products. Design and implementation experiences using the Encina OLTP product from Transarc in the High Performance Storage System are presented in more detail as a case study of how this technology can be applied to mass storage systems designed for distributed environments.
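
    The all-or-nothing metadata behavior described above is what two-phase commit provides. The toy sketch below shows only the coordination pattern; it is not Encina's API, and real OLTP systems add durable logging, locking, and failure recovery.

```python
# Sketch of the two-phase-commit idea behind distributed metadata updates.
# ResourceManager is a stand-in abstraction, not Encina's actual interface.
class ResourceManager:
    """One server's transactional view of its metadata (e.g., bitfiles)."""

    def __init__(self, name):
        self.name, self.committed, self.pending = name, {}, {}

    def prepare(self, txid, updates):
        # Phase 1: durably stage the changes; vote yes only if that worked.
        self.pending[txid] = updates
        return True

    def commit(self, txid):
        self.committed.update(self.pending.pop(txid))

    def abort(self, txid):
        self.pending.pop(txid, None)


def run_transaction(txid, work):
    """work maps each ResourceManager to the metadata updates it must apply."""
    votes = [rm.prepare(txid, updates) for rm, updates in work.items()]
    if all(votes):
        for rm in work:
            rm.commit(txid)    # Phase 2: everyone commits together...
    else:
        for rm in work:
            rm.abort(txid)     # ...or everyone rolls back together.
```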

  9. High-performance adhesive systems for polymer composite bonding applications

    NASA Astrophysics Data System (ADS)

    Klug, Jeremy Hager

    Adhesive films are utilized for polymeric composite bonding in numerous high-performance products including aerospace structures. These films must provide high bond strengths over the life-cycle of the part while not compromising the thermal or mechanical performance of the overall system. Currently, epoxy materials are most often employed in commercial adhesive films because of their versatility, cost, processing characteristics, and performance. However, there still exists a desire to improve these materials so that highly robust systems capable of optimized thermal, mechanical, and fracture resistance properties can be realized. In order to create these improved systems, a better understanding of the fundamental characteristics important in adhesion between dissimilar resin systems is needed. The goal of this research was to provide a means for obtaining this knowledge using an engineering approach. A methodology was developed by which model adhesive systems could be designed to explore processing-structure-property relationships. These model systems were designed to be characteristically similar and not chemically identical to commercial adhesive films. The methodology included a simulation engineering step to characterize the commercial product and develop the model system and a re-engineering step that occurs with the material manufacturer and customer to produce an improved product. The methodology was used to explore several issues for toughened epoxy adhesives including the adducting influence on performance, flexibilized liquid elastomer content importance, the relation between elastomer dispersed phase conversion and properties, the feasibility and performance of hybrid toughened resins, and the microcracking behavior of layered composite materials. Collectively, this research created a process that was applied to explore and identify important material parameters and provided information that can be used to design improved film adhesives.

  10. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nomura, Tomohiro; Nishimura, Nobuya; Yabushita, Akihiro; Kashiwagi, Takao

    Creating a high performance drying technology is essential from the viewpoint of energy conservation. Recently the drying process using superheated steam has received great attention for improving the energy efficiency of conventional air drying processes. Other advantages of superheated steam drying include its inert atmosphere, enhanced drying rate, improved product quality, and easier control. This study presents a new concept of superheated steam drying in which the absorption temperature amplifier is applied to recover waste heat with high efficiency. A feature of this new drying system is that, owing to a closed-circuit dryer, the consumption of heating energy decreases by approximately 50% compared with the conventional noncirculating one, and the superheated steam conventionally discharged to maintain the dryer at atmospheric pressure can be reused as heating energy for the generator of the absorption temperature amplifier. In this first report, the thermal performance of the proposed system has been analyzed by a computer simulation developed for the solar-assisted absorption heat transformer model at the steady-state operating condition. It may be fair to conclude that this drying system satisfies the desired operating conditions, although some problems remain to be solved in detail in the future.

  11. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation, and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and the IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to 10s of terabytes/day. This paper discusses HPSS architectural, implementation, and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  12. Sensor fusion methods for high performance active vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Collette, C.; Matichard, F.

    2015-04-01

    Sensor noise often limits the performance of active vibration isolation systems. The inertial sensors used in such systems can be selected from a wide range of instrument noise and size characteristics. However, the most sensitive instruments are often the biggest and the heaviest. Consequently, high-performance active isolators sometimes embed many tens of kilograms of instrumentation. The weight and size of instrumentation can add unwanted constraints on the design: they tend to lower the structure's natural frequencies and to reduce the collocation between sensors and actuators. Both effects tend to reduce feedback control performance and stability. This paper discusses sensor fusion techniques that can be used to increase the control bandwidth (and/or the stability). In this approach, the low-noise inertial instrument signal dominates the fusion at low frequency to provide vibration isolation, while other types of sensors (relative motion, smaller but noisier inertial, or force sensors) are used at higher frequencies to increase stability. Several sensor fusion configurations are studied. The paper shows the improvement that can be expected for several case studies, including rigid equipment, flexible equipment, and flexible equipment mounted on a flexible support structure.
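
    The simplest form of such a fusion is a complementary filter pair; a sketch under an assumed sample rate and crossover frequency is given below (the paper studies richer configurations than this).

```python
# Complementary-filter sensor fusion sketch: the quiet inertial sensor
# dominates below the crossover, the noisier auxiliary sensor above it.
# Sample rate and crossover frequency are illustrative assumptions.
import numpy as np
from scipy import signal

fs = 1000.0        # Hz, sample rate (assumed)
f_cross = 10.0     # Hz, crossover frequency (assumed)

# First-order Butterworth pair: the low- and high-pass outputs share a
# denominator and sum (near-)exactly to unity, so true motion passes through.
b_lo, a_lo = signal.butter(1, f_cross, btype="low", fs=fs)
b_hi, a_hi = signal.butter(1, f_cross, btype="high", fs=fs)


def fuse(inertial, auxiliary):
    """Return LP(inertial) + HP(auxiliary): each sensor contributes only
    in the band where it is quietest."""
    return (signal.lfilter(b_lo, a_lo, inertial) +
            signal.lfilter(b_hi, a_hi, auxiliary))
```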

  13. Coal-fired high performance power generating system

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulate emissions < 25% of NSPS; cost of electricity 10% lower; coal providing > 65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis, we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool for estimating NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept, it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high nitrogen coals, a rapid mixing, rich-lean, deep staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  14. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1 Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10 Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1 Hz deterministic simulation results with 10 Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  15. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  16. Low-Cost, High-Performance Hall Thruster Support System

    NASA Technical Reports Server (NTRS)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater, and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and stable and sustained operation during severe Hall effect thruster current oscillations. Laboratory testing has demonstrated discharge module efficiency of 96 percent, which is considerably higher than the current state of the art.

  17. High performance quarter-inch cartridge tape systems

    NASA Technical Reports Server (NTRS)

    Schwarz, Ted

    1993-01-01

    Within the established low-cost structure of Data Cartridge drive technology, it is possible to achieve nearly 1 terabyte (10^12 bytes) of data capacity and transfer rates of more than 1 Gbit/sec (greater than 100 Mbytes/sec). Whether this capability should be placed within a single cartridge will be determined by the market. The 3.5 in. or smaller form factor may suffice to serve both the current Data Cartridge market and a high performance segment. In any case, Data Cartridge technology provides a strong, sustainable technology growth path into the 21st century.

  18. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems

    PubMed Central

    CHIU, MATT; HERBORDT, MARTIN C.

    2011-01-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA’s resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD. PMID:21660208
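
    The filtering optimization itself is easy to state in software terms: pairs beyond the force cutoff contribute nearly nothing and are discarded before the expensive force evaluation. The NumPy sketch below shows a CPU analogue of that filter for Lennard-Jones forces; on the FPGA this filtering is a dedicated pipeline stage feeding the force pipelines rather than a boolean mask.

```python
# CPU sketch of cutoff filtering for the MD short-range (Lennard-Jones)
# force kernel: O(N^2) candidate pairs, far pairs filtered out up front.
import numpy as np


def short_range_forces(pos, cutoff=2.5, eps=1.0, sigma=1.0):
    """Lennard-Jones forces on N particles, with cutoff filtering."""
    forces = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos                      # vectors from every j to i
        r2 = np.einsum("ij,ij->i", d, d)
        mask = (r2 > 0) & (r2 < cutoff**2)    # the filter: drop far pairs
        r2m = r2[mask]
        inv6 = (sigma**2 / r2m) ** 3
        # Force magnitude over r, written in powers of r^2 to avoid sqrt.
        coef = 24 * eps * (2 * inv6**2 - inv6) / r2m
        forces[i] = (coef[:, None] * d[mask]).sum(axis=0)
    return forces


rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(200, 3))
print(np.abs(short_range_forces(pos).sum(axis=0)))  # ~0: Newton's third law
```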

  19. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  20. Object-oriented high-performance particle systems

    NASA Astrophysics Data System (ADS)

    Belyaev, Sergey Y.; Plotnikov, Max

    2003-10-01

    Particle systems are nowadays the most popular visualization method for various special effects in 3D computer graphics. A software implementation of a particle system must have an abstract object-oriented model in order to be generic and portable. At the same time, for real-time graphics the particle system must remain efficient in processor time and memory. This paper describes original methods that allow us to build such systems to be abstract and generic, as independent of the software environment as possible, and efficient at the same time.
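
    A minimal sketch of such an abstract model might separate the generic, efficient update logic from the effect- and API-specific pieces; the class layout below is an illustration of the design idea, not the paper's actual code.

```python
# Sketch of an abstract, portable particle-system core: generic update logic
# in the base class; emitters and renderers supplied by subclasses, keeping
# the system independent of any particular graphics API.
from abc import ABC, abstractmethod

import numpy as np


class ParticleSystem(ABC):
    def __init__(self, capacity):
        # Structure-of-arrays storage: cache-friendly and easy to batch.
        self.pos = np.zeros((capacity, 3), dtype=np.float32)
        self.vel = np.zeros((capacity, 3), dtype=np.float32)
        self.life = np.zeros(capacity, dtype=np.float32)

    def update(self, dt):
        """Generic per-frame integration; identical across all effects."""
        alive = self.life > 0
        self.pos[alive] += self.vel[alive] * dt
        self.life[alive] -= dt
        self.emit(dt)

    @abstractmethod
    def emit(self, dt):
        """Respawn dead particles; effect-specific (smoke, sparks, ...)."""

    @abstractmethod
    def render(self):
        """Issue draw calls through whatever graphics API the port uses."""
```

    Porting to a new renderer then means reimplementing only render(), which is one way to keep the system both generic and efficient.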

  1. High Performance Image Processing And Laser Beam Recording System

    NASA Astrophysics Data System (ADS)

    Fanelli, Anthony R.

    1980-09-01

    The article is meant to provide the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under Rome Air Development Center's guidance. The system provides the capability of mensuration and exploitation of digital imagery, with both mono and stereo digital images as inputs. This development provided the system design, basic hardware, software, and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder was also developed for the DIIPS system and is known as the Ultra High Resolution Image Recorder (UHRIR). The UHRIR is a prototype model that will enable Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution with geometric and radiometric distortion minimized.

  2. A High Performance Virtualized Seismic Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Eakins, J. A.; Reyes, J. C.; Franke, M.; Sánchez, R. F.; Cortes Muñoz, P.; Busby, R. W.; Vernon, F.; Barrientos, S. E.

    2014-12-01

    As part of a collaborative effort with the Incorporated Research Institutions for Seismology, a virtualized seismic data acquisition and processing system was recently installed at the Centro Sismológico Nacional (CSN) at the Universidad de Chile for use as part of their early warning system. Using lessons learned from the Earthscope Transportable Array project, the design of this system consists of dedicated acquisition, processing, and data distribution nodes hosted on a high availability hypervisor cluster. Data is exchanged with the IRIS Data Management Center and the existing processing infrastructure at the CSN. The processing nodes are backed by 20 TB of hybrid Solid State Disk (SSD) and spinning disk storage with automatic tiering of data between the disks. As part of the installation, best practices for station metadata maintenance were discussed and applied to the existing IRIS-sponsored stations, as well as over 30 new stations being added to the early warning network. Four virtual machines (VMs) were configured with distinctive tasks. Two VMs are dedicated to data acquisition, one to real-time data processing, and one serves as a relay between the data acquisition and processing systems, with services for the existing earthquake revision and dissemination infrastructure. The first acquisition system connects directly to Basalt dataloggers and Q330 digitizers, managing them and acquiring seismic data as well as state-of-health (SOH) information. As newly deployed stations become available (beyond the existing 30), this VM is configured to acquire data from them and incorporate the additional data. The second acquisition system imports the legacy network of the CSN and data streams provided by other data centers. The processing system is connected to the production and archive databases. The relay system merges all incoming data streams and obtains the processing results. Data and processing packets are available for subsequent review and dissemination by the CSN. Such

  3. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-controlled parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-controlled parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  4. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nishimura, Nobuya; Nomura, Tomohiro; Yabushita, Akihiro; Kashiwagi, Takao

    A computer simulation of the transient drying process has been developed in order to predict the dynamic thermal performance of a new superheated steam drying system using an absorption-type temperature amplifier as a steam superheater. A feature of this drying system is that one can reuse the exhausted superheated steam conventionally discharged from the dryer as a driving heat source for the generator in this heat pump. But in the transient drying process, the evaporation of moisture sharply decreases; accordingly, reusing exhausted superheated steam as a heating source for the generator can hardly be expected. So the effects of this exhausted superheated steam and of changes in hot water and cooling water temperatures were mainly investigated, checking whether this drying system can be driven directly by low-level energy from the sun or from waste heat. Furthermore, the system performance of this drying system was evaluated on a qualitative basis by using the exergy efficiency. The results show that, under transient drying conditions, the temperature boost of superheated steam is possible at a high temperature, and thus the absorption-type temperature amplifier can be an effective steam superheater system.

  5. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  6. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  7. Toward high performance radioisotope thermophotovoltaic systems using spectral control

    NASA Astrophysics Data System (ADS)

    Wang, Xiawa; Chan, Walker; Stelmakh, Veronika; Celanovic, Ivan; Fisher, Peter

    2016-12-01

    This work describes RTPV-PhC-1, an initial prototype for a radioisotope thermophotovoltaic (RTPV) system using a two-dimensional photonic crystal emitter and a low bandgap thermophotovoltaic (TPV) cell to realize spectral control. We validated a system simulation using measurements of RTPV-PhC-1 and of its comparison setup RTPV-FlatTa-1, which has the same configuration except for a polished tantalum emitter. The emitter of RTPV-PhC-1, powered by an electric heater providing energy equivalent to one plutonia fuel pellet, reached 950 °C with 52 W of thermal input power and produced 208 mW of output power from a 1 cm2 TPV cell. We compared the system performance using a photonic crystal emitter to that using a polished flat tantalum emitter and found that spectral control with the photonic crystal was four times more efficient. Based on the simulation, with more cell area, better TPV cells, and an improved insulation design, the system powered by a fuel-pellet-equivalent heat source is expected to reach an efficiency of 7.8%.

  8. A High Performance Content Based Recommender System Using Hypernym Expansion

    SciTech Connect

    Potok, Thomas E; Patton, Robert M

    2015-10-20

    There are two major limitations in content-based recommender systems: the first is accurately measuring the similarity of preferred documents to a large set of general documents, and the second is over-specialization, which limits the "interesting" documents recommended from a general document set. To address these issues, we propose combining linguistic methods and term frequency methods to improve overall performance and recommendation quality.
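
    The abstract does not give the algorithm, so the following is one plausible reading of hypernym expansion: enrich each term vector with WordNet ancestors so that related but non-identical vocabularies can still match. The expansion depth and uniform weighting are assumptions (requires NLTK with the WordNet corpus downloaded):

    ```python
    # Sketch of hypernym expansion for content-based recommendation: document
    # terms are expanded with WordNet hypernyms so that documents about "dog"
    # and "poodle" can match through shared ancestors. One plausible reading
    # of the approach, not the paper's code.
    # Requires: pip install nltk; then nltk.download("wordnet")

    from collections import Counter
    from nltk.corpus import wordnet as wn

    def expand_terms(tokens, depth=1):
        """Return term counts augmented with WordNet hypernyms up to `depth`."""
        counts = Counter(tokens)
        for tok in tokens:
            frontier = wn.synsets(tok)
            for _ in range(depth):
                frontier = [h for s in frontier for h in s.hypernyms()]
                for syn in frontier:
                    for lemma in syn.lemma_names():
                        counts[lemma.lower()] += 1
        return counts

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = (sum(v * v for v in a.values()) ** 0.5
                * sum(v * v for v in b.values()) ** 0.5)
        return dot / norm if norm else 0.0

    prefs = expand_terms("poodle obedience training".split())
    doc = expand_terms("dog behavior classes".split())
    print(f"similarity with hypernym expansion: {cosine(prefs, doc):.3f}")
    ```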

  9. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S; Chung, C; Fitsos, P; Gronberg, J; Ross, M; Khainovski, O; Kolomensky, Y; Loscutoff, P; Slater, M; Thomson, M; Ward, D; Boogert, S; Vogel, V; Meller, R; Lyapin, A; Malton, S; Miller, D; Frisch, J; Hinton, S; May, J; McCormick, D; Smith, S; Smith, T; White, G; Orimoto, T; Hayano, H; Honda, Y; Terunuma, N; Urakawa, J

    2005-09-12

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved - ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a carbon fiber frame with a zero coefficient of thermal expansion, and it has demonstrated that the three BPMs behave as a rigid body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  10. Fitting modular reconnaissance systems into modern high-performance aircraft

    NASA Astrophysics Data System (ADS)

    Stroot, Jacquelyn R.; Pingel, Leslie L.

    1990-11-01

    The installation of the Advanced Tactical Air Reconnaissance System (ATARS) in the F/A-18D(RC) presented a complex set of design challenges. At the time of the F/A-18D(RC) ATARS option exercise, the design and development of the ATARS subsystems and the parameters of the F/A-18D(RC) were essentially fixed. ATARS is to be installed in the gun bay of the F/A-18D(RC), occupying no additional room and adding no more weight than was removed. The F/A-18D(RC) installation solution required innovations in mounting, cooling, and fit techniques, which made constant trade studies essential. The successful installation in the F/A-18D(RC) is the result of coupling fundamental design engineering with brainstorming and nonstandard approaches to every situation. ATARS is sponsored by the Aeronautical Systems Division, Wright-Patterson AFB, Ohio. The F/A-18D(RC) installation is being funded to the Air Force by the Naval Air Systems Command, Washington, D.C.

  11. High-performance space shuttle auxiliary propellant valve system

    NASA Technical Reports Server (NTRS)

    Smith, G. M.

    1973-01-01

    Several potential valve closures for the space shuttle auxiliary propulsion system (SS/APS) were investigated analytically and experimentally in a modeling program. The most promising of these were analyzed and experimentally evaluated in a full-size functional valve test fixture of novel design. The engineering investigations conducted for both the model and full-size evaluations of the SS/APS valve closures and functional valve fixture are described. Preliminary designs, laboratory tests, and overall valve test fixture designs are presented, together with a final recommended flightweight SS/APS valve design.

  12. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.

    1984-01-01

    An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element or PE, has been completed. Eight BPs, together with their supporting random-access memory, a 64K x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed- or floating-point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BPs represent a likely initial chip level. The stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately to the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level, initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.
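
    The 64K x 9 ROM is a lookup-table adder: concatenating the two 8-bit operands forms a 16-bit address whose stored 9-bit word is the precomputed sum with carry. A small software sketch of the table the stage hardware would hold in ROM:

    ```python
    # Illustrative sketch of the record's "64K x 9 ROM to perform addition":
    # two 8-bit operands concatenated into a 16-bit address index a table
    # whose 9-bit entries hold the precomputed sum (8 bits plus carry).

    ROM = [(addr >> 8) + (addr & 0xFF) for addr in range(1 << 16)]  # 65,536 entries

    def rom_add(a: int, b: int) -> int:
        """Add two 8-bit values with a single table lookup, as the hardware would."""
        assert 0 <= a < 256 and 0 <= b < 256
        return ROM[(a << 8) | b]  # result fits in 9 bits (max 510)

    assert rom_add(200, 100) == 300
    assert max(ROM) == 510  # 9 bits suffice: 2**9 - 1 = 511
    ```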

  13. A high performance pneumatic braking system for heavy vehicles

    NASA Astrophysics Data System (ADS)

    Miller, Jonathan I.; Cebon, David

    2010-12-01

    Current research into reducing actuator delays in pneumatic brake systems is opening the door for advanced anti-lock braking algorithms to be used on heavy goods vehicles. However, these algorithms require the knowledge of variables that are impractical to measure directly. This paper introduces a sliding mode braking force observer to support a sliding mode controller for air-braked heavy vehicles. The performance of the observer is examined through simulations and field testing of an articulated heavy vehicle. The observer operated robustly during single-wheel vehicle simulations, and provided reasonable estimates of surface friction from test data. The effects of brake gain errors on the controller and observer are illustrated, and a recursive least squares estimator is derived for the brake gain. The estimator converged within 0.3 s in simulations and vehicle trials.
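
    A scalar recursive least squares estimator of the kind derived for the brake gain can be sketched as follows, assuming the simple measurement model y = g*u + noise; the regressor, forgetting factor, and noise level are placeholders rather than the paper's formulation:

    ```python
    # Minimal sketch of recursive least squares (RLS) for a scalar brake gain g
    # in the model y = g * u + noise, where u is the brake demand (e.g. chamber
    # pressure) and y the observed braking force.

    import random

    def rls_scalar(samples, lam=0.98, theta0=0.0, p0=1e3):
        """Return successive estimates of g with forgetting factor lam."""
        theta, p = theta0, p0
        history = []
        for u, y in samples:
            k = p * u / (lam + u * p * u)   # estimator gain
            theta += k * (y - u * theta)    # innovation update
            p = (p - k * u * p) / lam       # covariance update
            history.append(theta)
        return history

    g_true = 2.5  # hypothetical brake gain
    data = [(u, g_true * u + random.gauss(0, 0.05))
            for u in (random.uniform(0.5, 1.5) for _ in range(200))]
    est = rls_scalar(data)
    print(f"final estimate: {est[-1]:.3f} (true {g_true})")
    ```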

  14. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, A.

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
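
    Loop tiling (cache blocking) is a representative example of the compiler optimizations explored in Chapter 5: it raises the number of references to each item while it resides in the cache. A sketch on matrix multiply, with an assumed tile size:

    ```python
    # Sketch of loop tiling (cache blocking) for matrix multiply: each
    # tile-by-tile block is touched many times while it is cache-resident,
    # cutting memory traffic. Pure Python for clarity; a compiler would
    # apply the same transformation to compiled loops.

    def matmul_tiled(A, B, n, tile=32):
        C = [[0.0] * n for _ in range(n)]
        for ii in range(0, n, tile):
            for kk in range(0, n, tile):
                for jj in range(0, n, tile):
                    # All three tiles fit in cache; every element loaded here
                    # is reused up to `tile` times before being evicted.
                    for i in range(ii, min(ii + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            a = A[i][k]
                            for j in range(jj, min(jj + tile, n)):
                                C[i][j] += a * B[k][j]
        return C

    n = 64
    A = [[float(i ^ j) for j in range(n)] for i in range(n)]
    B = [[float(i + j) for j in range(n)] for i in range(n)]
    C = matmul_tiled(A, B, n)
    ```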

  15. Dynamic Thermal Management for High-Performance Storage Systems

    SciTech Connect

    Kim, Youngjae; Gurumurthi, Dr Sudhanva; Sivasubramaniam, Anand

    2012-01-01

    Thermal-aware design of disk drives is important because high temperatures can cause reliability problems. Dynamic Thermal Management (DTM) techniques have been proposed to operate the disk at the average-case temperature, rather than at the worst case, by modulating the activities to avoid thermal emergencies. The thermal emergencies can be caused by unexpected events, such as fan breaks, increased inlet air temperature, etc. One of the DTM techniques is a delay-based approach that adjusts the disk seek activities, cooling down the disk drives. Even if such a DTM approach can overcome thermal emergencies without stopping disk activity, it suffers from long delays when servicing requests. Thus, in this chapter, we investigate the possibility of using a multispeed disk drive (called dynamic rotations per minute (DRPM)) that dynamically modulates the rotational speed of the platter for implementing the DTM technique. Using a detailed performance and thermal simulator of a storage system, we evaluate two possible DTM policies (time-based and watermark-based) with a DRPM disk drive and observe that dynamic RPM modulation is effective in avoiding thermal emergencies. However, we find that the time taken to transition between the different rotational speeds of the disk is critical for the effectiveness of DRPM-based DTM techniques.
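
    The watermark-based policy can be illustrated with a toy two-speed thermal loop: drop to the low-RPM mode when the temperature crosses the high watermark, and return to full speed once it falls below the low watermark. All constants below are invented for illustration, not taken from the chapter:

    ```python
    # Toy simulation of a watermark-based DTM policy on a two-speed
    # (DRPM-like) drive. Heating depends on the RPM mode; cooling pulls
    # the temperature toward ambient.

    T_HIGH, T_LOW = 55.0, 50.0           # watermarks, deg C
    HEAT = {"fast": 0.60, "slow": 0.25}  # heating rate per step by mode
    COOL = 0.30                          # cooling coefficient

    temp, mode, ambient = 45.0, "fast", 40.0
    for step in range(40):
        temp += HEAT[mode] - COOL * (temp - ambient) / 10.0
        if mode == "fast" and temp >= T_HIGH:
            mode = "slow"   # thermal emergency approaching: shed heat
        elif mode == "slow" and temp <= T_LOW:
            mode = "fast"   # safe again: restore performance
        print(f"t={step:2d} temp={temp:5.2f}C mode={mode}")
    ```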

  16. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  17. Research into the interaction between high performance and cognitive skills in an intelligent tutoring system

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.

    1991-01-01

    Two intelligent tutoring systems were developed. These tutoring systems are being used to study the effectiveness of intelligent tutoring systems in training high performance tasks and the interrelationship of high performance and cognitive tasks. The two tutoring systems, referred to as the Console Operations Tutors, were built using the same basic approach to the design of an intelligent tutoring system. This design approach allowed researchers to more rapidly implement the cognitively based tutor, the OMS Leak Detect Tutor, by using the foundation of code generated in the development of the high performance based tutor, the Manual Select Keyboard (MSK). It is believed that the approach can be further generalized to develop a generic intelligent tutoring system implementation tool.

  18. Evaluating the Clinical Accuracy of GlucoMen®Day: A Novel Microdialysis-based Continuous Glucose Monitor

    PubMed Central

    Valgimigli, Francesco; Lucarelli, Fausto; Scuffi, Cosimo; Morandi, Sara; Sposato, Iolanda

    2010-01-01

    Background: The objective of this work was to determine the clinical accuracy of GlucoMen®Day, a new microdialysis-based continuous glucose monitoring system (CGMS) from A. Menarini Diagnostics (Florence, Italy). Accuracy evaluation was performed using continuous glucose-error grid analysis (CG-EGA), as recommended by the Performance Metrics for Continuous Interstitial Glucose Monitoring; Approved Guideline (POCT05-A). Methods: Two independent clinical trials were carried out on patients with type 1 and type 2 diabetes mellitus, whose glycemic levels were monitored in an in-home setting for 100-hour periods. A new multiparametric algorithm was developed and used to compensate the GlucoMen®Day signal in real time. The time lag between continuous glucose monitoring (CGM) and reference data was first estimated using the Poincaré plot method. The entire set of CGM/reference data pairs was then evaluated following the CG-EGA criteria, which allowed an estimation of the combined point and rate accuracy stratified by glycemic range. Results: With an estimated time lag of 11 minutes, the linear regression analysis of the CGM/reference glucose values yielded r = 0.92. The mean absolute error (MAE) was 11.4 mg/dl. The calculated mean absolute rate deviation (MARD) was 0.63 mg/dl/min. The data points falling within the A+B zones of CG-EGA were 100% in hypoglycemia, 95.7% in euglycemia, and 95.2% in hyperglycemia. Conclusions: The GlucoMen®Day system provided reliable, real-time measurement of subcutaneous glucose levels in patients with diabetes for up to 100 hours. The device showed the ability to follow rapid glycemic excursions and detect severe hypoglycemic events accurately. Its accuracy parameters fitted the criteria of the state-of-the-art consensus guideline for CGMS, with highly consistent results from two independent studies. PMID:20920438
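
    The point and rate accuracy figures reported above can be computed from lag-compensated CGM/reference pairs roughly as follows. Note that this paper uses MARD for mean absolute rate deviation in mg/dl/min, not the more common mean absolute relative difference; the sketch follows the paper's usage, and the traces below are synthetic placeholders:

    ```python
    # Sketch of two accuracy metrics from the abstract: mean absolute error
    # (MAE, mg/dl) and mean absolute rate deviation (rate-of-change error,
    # mg/dl/min), computed on paired CGM/reference samples.

    import numpy as np

    def mae(cgm, ref):
        return np.mean(np.abs(cgm - ref))

    def mean_abs_rate_dev(cgm, ref, dt_min):
        """Compare glucose rates of change (finite differences) in mg/dl/min."""
        rate_cgm = np.diff(cgm) / dt_min
        rate_ref = np.diff(ref) / dt_min
        return np.mean(np.abs(rate_cgm - rate_ref))

    t = np.arange(0, 120, 5.0)                  # 5-minute samples
    ref = 120 + 40 * np.sin(t / 30.0)           # synthetic reference trace
    cgm = ref + np.random.normal(0, 8, t.size)  # synthetic sensor trace
    print(f"MAE  = {mae(cgm, ref):.1f} mg/dl")
    print(f"MARD = {mean_abs_rate_dev(cgm, ref, 5.0):.2f} mg/dl/min")
    ```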

  19. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

    The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. The design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation are described. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  20. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    ERIC Educational Resources Information Center

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  1. Unlocking the Black Box: Exploring the Link between High-Performance Work Systems and Performance

    ERIC Educational Resources Information Center

    Messersmith, Jake G.; Patel, Pankaj C.; Lepak, David P.

    2011-01-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level…

  2. High-Performance Work Systems and School Effectiveness: The Case of Malaysian Secondary Schools

    ERIC Educational Resources Information Center

    Maroufkhani, Parisa; Nourani, Mohammad; Bin Boerhannoeddin, Ali

    2015-01-01

    This study focuses on the impact of high-performance work systems on the outcomes of organizational effectiveness with the mediating roles of job satisfaction and organizational commitment. In light of the importance of human resource activities in achieving organizational effectiveness, we argue that higher employees' decision-making capabilities…

  3. High-performance control system for a heavy-ion medical accelerator

    SciTech Connect

    Lancaster, H.D.; Magyary, S.B.; Sah, R.C.

    1983-03-01

    A high performance control system is being designed as part of a heavy ion medical accelerator. The accelerator will be a synchrotron dedicated to clinical and other biomedical uses of heavy ions, and it will deliver fully stripped ions at energies up to 800 MeV/nucleon. A key element in the design of an accelerator which will operate in a hospital environment is to provide a high performance control system. This control system will provide accelerator modeling to facilitate changes in operating mode, provide automatic beam tuning to simplify accelerator operations, and provide diagnostics to enhance reliability. The control system being designed utilizes many microcomputers operating in parallel to collect and transmit data; complex numerical computations are performed by a powerful minicomputer. In order to provide the maximum operational flexibility, the Medical Accelerator control system will be capable of dealing with pulse-to-pulse changes in beam energy and ion species.

  4. Evolution of a high-performance storage system based on magnetic tape instrumentation recorders

    NASA Astrophysics Data System (ADS)

    Peters, Bruce

    In order to provide transparent access to data in network computing environments, high performance storage systems are getting smarter as well as faster. Magnetic tape instrumentation recorders contain an increasing amount of intelligence in the form of software and firmware that manages the processes of capturing input signals and data, putting them on media and then reproducing or playing them back. Such intelligence makes them better recorders, ideally suited for applications requiring the high-speed capture and playback of large streams of signals or data. In order to make recorders better storage systems, intelligence is also being added to provide appropriate computer and network interfaces along with services that enable them to interoperate with host computers or network client and server entities. Thus, recorders are evolving into high-performance storage systems that become an integral part of a shared information system. Datatape has embarked on a program with the Caltech-sponsored Concurrent Supercomputing Consortium to develop a smart mass storage system. Working within the framework of the emerging IEEE Mass Storage System Reference Model, a high-performance storage system that works with the STX File Server to provide storage services for the Intel Touchstone Delta Supercomputer is being built. Our objective is to provide the required high storage capacity and transfer rate to support grand challenge applications, such as global climate modeling.

  5. Evolution of a high-performance storage system based on magnetic tape instrumentation recorders

    NASA Technical Reports Server (NTRS)

    Peters, Bruce

    1993-01-01

    In order to provide transparent access to data in network computing environments, high performance storage systems are getting smarter as well as faster. Magnetic tape instrumentation recorders contain an increasing amount of intelligence in the form of software and firmware that manages the processes of capturing input signals and data, putting them on media and then reproducing or playing them back. Such intelligence makes them better recorders, ideally suited for applications requiring the high-speed capture and playback of large streams of signals or data. In order to make recorders better storage systems, intelligence is also being added to provide appropriate computer and network interfaces along with services that enable them to interoperate with host computers or network client and server entities. Thus, recorders are evolving into high-performance storage systems that become an integral part of a shared information system. Datatape has embarked on a program with the Caltech-sponsored Concurrent Supercomputing Consortium to develop a smart mass storage system. Working within the framework of the emerging IEEE Mass Storage System Reference Model, a high-performance storage system that works with the STX File Server to provide storage services for the Intel Touchstone Delta Supercomputer is being built. Our objective is to provide the required high storage capacity and transfer rate to support grand challenge applications, such as global climate modeling.

  6. High Performance Variable Speed Drive System and Generating System with Doubly Fed Machines

    NASA Astrophysics Data System (ADS)

    Tang, Yifan

    Doubly fed machines are another alternative for variable speed drive systems. The doubly fed machines, including the doubly fed induction machine, the self-cascaded induction machine and the doubly excited brushless reluctance machine, have several attractive advantages for variable speed drive applications, the most important one being the significant cost reduction with a reduced power converter rating. With a better understanding, improved machine design, flexible power converters and innovative controllers, the doubly fed machines could favorably compete for many applications, which may also include variable speed power generation. The goal of this research is to enhance the attractiveness of the doubly fed machines for both variable speed drive and variable speed generator applications. Recognizing that wind power is one of the favorable clean, renewable energy sources that can contribute to the solution to the energy and environment dilemma, a novel variable-speed constant-frequency wind power generating system is proposed. By variable speed operation, the energy capturing capability of the wind turbine is improved. The improvement can be further enhanced by effectively utilizing the doubly excited brushless reluctance machine in a slip power recovery configuration. For the doubly fed machines, a stator flux two-axis dynamic model is established, based on which a flexible active and reactive power control strategy can be developed. High performance operation of the drive and generating systems is obtained through advanced control methods, including stator field orientation control, fuzzy logic control and adaptive fuzzy control. System studies are pursued through unified modeling, computer simulation, stability analysis and power flow analysis of the complete drive system or generating system with the machine, the converter and the control. Laboratory implementations and tested results with a digital signal processor system are also presented.

  7. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
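
    The frequency/period noise abstraction can be illustrated with a toy progress model: each simulated process loses `duration` seconds of work every 1/frequency seconds. The numbers below are illustrative; xSim's processor model is far more detailed:

    ```python
    # Toy model of injected OS noise in a simulated compute phase: wall time
    # for `work_s` seconds of pure work when a noise event of `duration_s`
    # fires every 1/freq_hz seconds (the abstraction described above).

    def noisy_compute_time(work_s, freq_hz, duration_s):
        t, done = 0.0, 0.0
        next_noise = 1.0 / freq_hz
        while done < work_s:
            step = min(work_s - done, next_noise - t)
            t += step
            done += step
            if done < work_s:          # noise event fires before work finishes
                t += duration_s
                next_noise += 1.0 / freq_hz
        return t

    # A 10 ms compute phase under 1 kHz noise of 25 us per event:
    print(f"{noisy_compute_time(10e-3, 1000.0, 25e-6) * 1e3:.3f} ms")
    ```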

  8. Unlocking the black box: exploring the link between high-performance work systems and performance.

    PubMed

    Messersmith, Jake G; Patel, Pankaj C; Lepak, David P; Gould-Williams, Julian

    2011-11-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.

  9. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high performance imaging system and their forecast cost structure is presented.

  10. A tutorial on the construction of high-performance resolution/paramodulation systems

    SciTech Connect

    Butler, R.; Overbeek, R.

    1990-09-01

    Over the past 25 years, researchers have written numerous deduction systems based on resolution and paramodulation. Of these systems, a very few have been capable of generating and maintaining a "formula database" containing more than just a few thousand clauses. These few systems were used to explore mechanisms for rapidly extracting limited subsets of "relevant" clauses. We have written this tutorial to reflect some of the best ideas that have emerged and to cast them in a form that makes them easily accessible to students wishing to write their own high-performance systems. 4 refs.
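
    The core inference such systems implement at scale is binary resolution. A minimal propositional sketch follows; high-performance provers add indexing, subsumption, and paramodulation for equality on top of this:

    ```python
    # Minimal propositional resolution step: clauses are frozensets of
    # literals, and negation is a leading "~". Sketch for intuition only.

    def negate(lit: str) -> str:
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolvents(c1: frozenset, c2: frozenset):
        """Yield every clause derivable from c1 and c2 by binary resolution."""
        for lit in c1:
            if negate(lit) in c2:
                yield (c1 - {lit}) | (c2 - {negate(lit)})

    # (P or Q) and (~Q or R) resolve on Q to give (P or R):
    for r in resolvents(frozenset({"P", "Q"}), frozenset({"~Q", "R"})):
        print(sorted(r))
    ```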

  11. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls has been increasing among the elderly as society ages. The main factor is a deteriorating sense of balance due to declining physical performance. Another major factor is that the elderly tend to walk bowlegged, and the body's center of gravity tends to swing from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system to treat gaps in the center of balance. We also designed High-Performance Shoes that show the status of a person's balance while walking. We also provided walking assistance through an insole whose stiffness, matched to the pressure distribution of the human sole, could be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole distribution patterns and corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.

  12. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The high performance system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, making image processing algorithms parallel by using high performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks to a cluster of computing nodes. While achieving performance thirty or more times faster than the traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work includes development of more performance-optimized algorithms, robust image matching methods and application

  13. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

    SciTech Connect

    Tierney, Brian; Johnston, William; Crowley, Brian; Hoo, Gary; Brooks, Chris; Gunter, Dan

    1999-12-23

    The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
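
    The essence of the methodology is precision-timestamped, structured event logs that can be correlated end to end. The following sketch conveys the idea only; the field names and JSON record format are assumptions, not NetLogger's actual log grammar or API:

    ```python
    # Illustrative end-to-end event instrumentation in the spirit of the
    # methodology above: components write timestamped, structured events,
    # and a later pass correlates them by a shared transfer id.

    import json, time

    def log_event(stream, event, **fields):
        rec = {"ts": time.time(), "event": event, **fields}  # wall-clock stamp
        stream.write(json.dumps(rec) + "\n")

    with open("transfer.log", "w") as log:
        log_event(log, "client.request.start", xfer_id=42, nbytes=1 << 20)
        time.sleep(0.01)                   # stand-in for the data transfer
        log_event(log, "client.request.end", xfer_id=42)

    # Post-processing: pair start/end events to get per-transfer latency.
    events = [json.loads(line) for line in open("transfer.log")]
    start = {e["xfer_id"]: e["ts"] for e in events if e["event"].endswith("start")}
    for e in events:
        if e["event"].endswith("end"):
            dt_ms = (e["ts"] - start[e["xfer_id"]]) * 1e3
            print(f"xfer {e['xfer_id']}: {dt_ms:.1f} ms")
    ```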

  14. Damage-Mitigating Control of Space Propulsion Systems for High Performance and Extended Life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang

    1994-01-01

    A major goal in the control of complex mechanical systems such as spacecraft, rocket engines, advanced aircraft, and power plants is to achieve high performance with increased reliability, component durability, and maintainability. The current practice of decision and control systems synthesis focuses on improving performance and diagnostic capabilities under constraints that often do not adequately represent materials degradation. In view of the high performance requirements of the system and the availability of improved materials, the lack of appropriate knowledge about the properties of these materials will lead either to less than achievable performance due to overly conservative design, or to over-straining of the structure, leading to unexpected failures and a drastic reduction of the service life. The key idea in this report is that a significant improvement in service life could be achieved by a small reduction in the system dynamic performance. The major task is to characterize the damage generation process, and then utilize this information in a mathematical form to synthesize a control law that would meet the system requirements and simultaneously satisfy the constraints that are imposed by the material and structural properties of the critical components. The concept of damage mitigation is introduced for control of mechanical systems to achieve high performance with a prolonged life span. A model of fatigue damage dynamics is formulated in the continuous-time setting, instead of a cycle-based representation, for direct application to control systems synthesis. An optimal control policy is then formulated via nonlinear programming under specified constraints on the damage rate and accumulated damage. The results of simulation experiments for the transient upthrust of a bipropellant rocket engine are presented to demonstrate the efficacy of the damage-mitigating control concept.
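
    The continuous-time damage formulation can be illustrated by integrating a damage rate alongside the load trajectory and comparing transients. The power-law damage rate below is a stand-in for illustration, not the report's fatigue model:

    ```python
    # Sketch of continuous-time damage bookkeeping: Euler-integrate a damage
    # rate dD/dt = k * load**exponent over two thrust-up profiles and compare.

    def accumulated_damage(load_profile, dt=0.01, k=1e-4, exponent=4.0):
        d = 0.0
        for load in load_profile:
            d += k * load ** exponent * dt
        return d

    fast = [min(1.0, t * 0.01 / 0.5) for t in range(500)]  # full load in 0.5 s
    slow = [min(1.0, t * 0.01 / 2.0) for t in range(500)]  # full load in 2 s

    print(f"fast ramp damage: {accumulated_damage(fast):.2e}")
    print(f"slow ramp damage: {accumulated_damage(slow):.2e}")
    # The slower transient sacrifices a little performance for less damage,
    # which is the trade a damage-mitigating controller optimizes.
    ```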

  15. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds and approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  16. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    SciTech Connect

    Wang, Teng; Oral, H Sarp; Wang, Yandong; Settlemyer, Bradley W; Atchley, Scott; Yu, Weikuan

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurate high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5x on leadership computer systems.
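
    A burst buffer's contract can be sketched in a few lines: the application dumps checkpoint data into a fast buffer and continues, while a background drainer flushes to the slow parallel file system. Names and delays below are illustrative, not BurstMem's implementation:

    ```python
    # Minimal burst-buffer sketch: bursty writes land in a fast in-memory
    # queue; a background thread drains ("flushes") them to backing storage.

    import queue, threading, time

    burst_buffer = queue.Queue()

    def flusher(stop):
        """Drain buffered writes to the (simulated) parallel file system."""
        while not (stop.is_set() and burst_buffer.empty()):
            try:
                chunk = burst_buffer.get(timeout=0.1)
            except queue.Empty:
                continue
            time.sleep(0.05)              # stand-in for a slow PFS write
            burst_buffer.task_done()

    stop = threading.Event()
    threading.Thread(target=flusher, args=(stop,), daemon=True).start()

    for i in range(10):                   # bursty checkpoint phase
        burst_buffer.put(b"x" * 4096)     # returns immediately: burst absorbed
    stop.set()
    burst_buffer.join()                   # wait until everything is flushed
    print("all checkpoints flushed to backing store")
    ```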

  17. A High Performance Parachute System for the Recovery of Small Space Capsules

    NASA Astrophysics Data System (ADS)

    Koldaev, V.; Moraes, P., Jr.

    2002-01-01

    A non-guided high performance parachute system has been developed and tested for the recovery of orbital payloads or space capsules. The system is safe, efficient, and affordable for use with small vehicles. It is based on a pilot parachute, a drag parachute, a cluster of main parachutes, and an air bag to reduce the impact. The system has been designed to maintain a stable descent with a velocity of up to 10 m/s and to prevent failures. To assure the achievement of all these characteristics, the determination of the parachute canopy areas, inflation, and flight dynamics has been addressed through numerical optimisation of the system parameters. Due to the mainly empirical nature of parachute design and development, wind tunnel and flight tests were conducted in order to achieve the high reliability imposed by user requirements. The present article describes the system and discusses in detail the design features and testing of the parachutes.

  18. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next-generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.

  19. Coal-fired high performance power generating system. Quarterly progress report, January 1--March 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    This report covers work carried out under Task 2, Concept Definition and Analysis, and Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤ 25% NSPS; cost ≥ 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The cycle optimization effort has brought about several revisions to the system configuration resulting from: (1) the use of Illinois No. 6 coal instead of Utah Blind Canyon; (2) the use of coal rather than methane as a reburn fuel; (3) reducing radiant section outlet temperatures to 1700°F (down from 1800°F); and (4) the need to use higher performance (higher cost) steam cycles to offset losses introduced as more realistic operating and construction constraints are identified.

  20. High-performance electronics for time-of-flight PET systems.

    PubMed

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals, respectively. PMID:24575149
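
    The CFD referenced above derives an amplitude-independent time stamp from the zero crossing of a bipolar waveform formed from an attenuated copy of the pulse minus a delayed copy. A sketch with invented pulse parameters (not the Cardinal electronics' values):

    ```python
    # Sketch of constant-fraction discrimination: the zero crossing of
    # (fraction * pulse - delayed pulse) is independent of pulse amplitude,
    # so two pulses differing only in height yield the same time stamp.

    import numpy as np

    def cfd_time(pulse, dt_ns, fraction=0.3, delay_ns=2.0):
        """Return the zero-crossing time of the bipolar CFD waveform, in ns."""
        d = int(round(delay_ns / dt_ns))
        bipolar = fraction * pulse[d:] - pulse[:-d]   # attenuated minus delayed
        idx = np.flatnonzero((bipolar[:-1] > 0) & (bipolar[1:] <= 0))[0]
        # Linear interpolation between samples for sub-sample timing.
        f = bipolar[idx] / (bipolar[idx] - bipolar[idx + 1])
        return (idx + f + d) * dt_ns

    t = np.arange(0, 50, 0.1)                        # 100 ps sampling
    pulse = np.exp(-((t - 10) ** 2) / 8.0)           # unit-amplitude pulse
    print(cfd_time(pulse, 0.1), cfd_time(5.0 * pulse, 0.1))  # identical times
    ```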

  1. High-performance electronics for time-of-flight PET systems

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Peng, Q.; Vu, C. Q.; Turko, B. T.; Moses, W. W.

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals, respectively.

  2. PREFACE: XV Brazilian Symposium on High Performance Computational Systems (WSCAD 2014)

    NASA Astrophysics Data System (ADS)

    Melo, Alba; Fazenda, Alvaro; Stringhini, Denise

    2015-10-01

    We are very pleased to welcome you to this edition of the Journal of Physics: Conference Series. This special issue for WSCAD 2014 (XV Brazilian Symposium on High Performance Computational Systems) comprises seven papers carefully selected from the best papers of the 2014 conference. The authors of the selected papers submitted extended versions of their work, and all the papers then went through a new review process. We are thankful to the authors for their contributions to this special issue.

  3. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    SciTech Connect

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling and the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation and characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on ~1,000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large-scale high performance computing resources for the first time.
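
    At its smallest scale, the hierarchical arrangement pairs a parameter-sampling layer with parallel stiff-ODE simulations. A toy sketch of that innermost pattern, with an invented one-state model and parameter grid:

    ```python
    # Sketch of a parameter-sampling layer farming out independent stiff-ODE
    # simulations to parallel workers, the innermost level of the hierarchy.

    from multiprocessing import Pool
    import numpy as np
    from scipy.integrate import solve_ivp

    def simulate(k):
        """One metabolic-style simulation: exponential decay with rate k."""
        sol = solve_ivp(lambda t, y: -k * y, (0.0, 10.0), [1.0], method="BDF")
        return k, sol.y[0, -1]          # final concentration

    if __name__ == "__main__":
        grid = np.linspace(0.1, 2.0, 8)       # sampled rate constants
        with Pool(4) as pool:                 # the innermost parallel level
            for k, y_end in pool.map(simulate, grid):
                print(f"k={k:.2f} -> y(10)={y_end:.4f}")
    ```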

  4. A High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using a Particle System

    NASA Astrophysics Data System (ADS)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of the rainfall-runoff process is essential for disaster emergency response and sustainable development. One common disadvantage of existing conceptual hydrological models is that they are highly dependent upon specific spatial-temporal contexts. Meanwhile, due to the inter-dependence of adjacent flow paths, it is still difficult for RS- or GIS-supported distributed hydrological models to achieve high performance in real-world applications. In an attempt to improve the efficiency of those models, this study presents a high-performance rainfall-runoff simulating framework based on a flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent rain drop particles. A separate particle system, representing surface runoff, is involved to model the precipitation process and simulate surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The results of the speedup experiment show that the proposed framework can significantly improve simulating performance simply by adding independent processors. By separating the catchment elements and the accumulated water, this study provides an extensible solution for improving the existing distributed hydrological models. Further, a parallel modeling and simulating platform needs to be developed and validated for application in monitoring real-world hydrologic processes.
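
    The particle-on-flow-path idea can be reduced to a few lines: rain-drop particles hop along a topologically linked drainage network each time step, independently of one another, which is what makes the scheme embarrassingly parallel. The network and rainfall below are invented for illustration:

    ```python
    # Toy particle-system runoff: particles advance one flow-path edge per
    # step; the outlet count over time traces a miniature hydrograph.

    import random

    # downstream[i] is the node that node i drains to; None marks the outlet.
    downstream = {0: 2, 1: 2, 2: 3, 3: None}

    particles = [random.choice([0, 1]) for _ in range(1000)]  # rain on headwaters
    for step in range(5):
        arrived = sum(1 for node in particles if node is None)
        print(f"step {step}: cumulative particles at outlet = {arrived}")
        # Each particle advances independently of the others.
        particles = [downstream[n] if n is not None else None for n in particles]
    ```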

  5. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.

  6. Towards Building High Performance Medical Image Management System for Clinical Trials

    PubMed Central

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems. PMID:21603096

  7. Coal-fired high performance power generating system. Quarterly progress report, April 1--June 30, 1993

    SciTech Connect

    Not Available

    1993-11-01

    This report covers work carried out under Task 2, Concept Definition and Analysis, Task 3, Preliminary R&D, and Task 4, Commercial Generating Plant Design, under Contract AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤ 25% NSPS; cost ≥ 65% of heat input; all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. A survey of currently available high temperature alloys has been completed and some of their high temperature properties are shown for comparison. Several of the most promising candidates will be selected for testing to determine corrosion resistance and high temperature strength. The corrosion resistance testing of candidate refractory coatings is continuing and some of the recent results are presented. This effort will provide important design information that will ultimately establish the operating ranges of the HITAF.

  9. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
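
    The event-filtering idea above can be illustrated with a minimal subscription-based filter: management tools register predicates, and only matching events are forwarded, which cuts monitoring traffic at the source. This is a sketch in Python with illustrative names, not the paper's API.

        # Subscription-based event filtering: forward an event only to
        # subscribers whose predicate matches, instead of broadcasting
        # every event to every management tool.
        from collections import defaultdict

        class EventFilter:
            def __init__(self):
                self.subscribers = defaultdict(list)  # type -> [(predicate, callback)]

            def subscribe(self, event_type, predicate, callback):
                self.subscribers[event_type].append((predicate, callback))

            def publish(self, event):
                for predicate, callback in self.subscribers.get(event["type"], []):
                    if predicate(event):
                        callback(event)

        f = EventFilter()
        f.subscribe("cpu_load", lambda e: e["value"] > 0.9,
                    lambda e: print("alert:", e))
        f.publish({"type": "cpu_load", "node": "n42", "value": 0.95})  # forwarded
        f.publish({"type": "cpu_load", "node": "n17", "value": 0.30})  # filtered out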

  10. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    PubMed

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software which is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with relatively low associated memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors.

  11. A compilation system that integrates high performance Fortran and Fortran M

    SciTech Connect

    Foster, I.; Xu, Ming; Avalani, B.; Choudhary, A.

    1994-06-01

    Task parallelism and data parallelism are often seen as mutually exclusive approaches to parallel programming. Yet there are important classes of application, for example in multidisciplinary simulation and command and control, that would benefit from an integration of the two approaches. In this paper, we describe a programming system that we are developing to explore this sort of integration. This system builds on previous work on task-parallel and data-parallel Fortran compilers to provide an environment in which the task-parallel language Fortran M can be used to coordinate data-parallel High Performance Fortran tasks. We use an image-processing problem to illustrate the issues that arise when building an integrated compilation system of this sort.
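
    As a conceptual analogy (in Python rather than Fortran M/HPF), the integration can be pictured as a task-parallel coordinator sequencing the stages of an image-processing pipeline, each stage being internally data-parallel over a worker pool:

        # Task parallelism: the coordinator sequences distinct pipeline stages.
        # Data parallelism: within a stage, one kernel is applied to all tiles
        # concurrently. Kernel bodies are placeholders for illustration.
        from concurrent.futures import ProcessPoolExecutor

        def blur(tile):
            return [v // 2 for v in tile]

        def threshold(tile):
            return [1 if v > 8 else 0 for v in tile]

        def run_stage(kernel, tiles, pool):
            return list(pool.map(kernel, tiles))

        if __name__ == "__main__":
            image_tiles = [[10, 20, 30], [5, 15, 25]]
            with ProcessPoolExecutor() as pool:
                blurred = run_stage(blur, image_tiles, pool)
                mask = run_stage(threshold, blurred, pool)
            print(mask)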

  12. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means use at exascale comes at a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon Phi-based systems in comparison to NVIDIA Tesla and Sandy Bridge-based systems.

  13. An Empirical Examination of the Mechanisms Mediating between High-Performance Work Systems and the Performance of Japanese Organizations

    ERIC Educational Resources Information Center

    Takeuchi, Riki; Lepak, David P.; Wang, Heli; Takeuchi, Kazuo

    2007-01-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human…

  14. Management of Virtual Large-scale High-performance Computing Systems

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2011-01-01

    Linux is widely used on high-performance computing (HPC) systems, from commodity clusters to Cray supercomputers (which run the Cray Linux Environment). These platforms primarily differ in their system configuration: some only use SSH to access compute nodes, whereas others employ full resource management systems (e.g., Torque and ALPS on Cray XT systems). Furthermore, the latest improvements in system-level virtualization techniques, such as hardware support, virtual machine migration for system resilience purposes, and reduction of virtualization overheads, enable the use of virtual machines on HPC platforms. Currently, tools for the management of virtual machines in the context of HPC systems are still quite basic, and often tightly coupled to the target platform. In this document, we present a new system tool for the management of virtual machines in the context of large-scale HPC systems, including a run-time system and support for all major virtualization solutions. The proposed solution is based on two key aspects. First, Virtual System Environments (VSE), introduced in a previous study, provide a flexible method to define the software environment that will be used within virtual machines. Second, we propose a new system run-time for the management and deployment of VSEs on HPC systems, which supports a wide range of system configurations. For instance, this generic run-time can interact with resource managers such as Torque for the management of virtual machines. Finally, the proposed solution provides appropriate abstractions to enable use with a variety of virtualization solutions on different Linux HPC platforms, including Xen, KVM and the HPC-oriented Palacios.

  15. State observers and Kalman filtering for high performance vibration isolation systems.

    PubMed

    Beker, M G; Bertolini, A; van den Brand, J F J; Bulten, H J; Hennes, E; Rabeling, D S

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system. PMID:24689604
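
    The control scheme described above can be sketched for a toy one-dimensional suspended-mass model: a steady-state Kalman observer estimates the state from a position measurement, and a linear quadratic regulator feeds back on that estimate. The dynamics, weights, and noise covariances below are assumed illustrative values, not the Advanced Virgo bench model.

        # LQR + steady-state Kalman observer on a toy 2-state plant.
        import numpy as np
        from scipy.linalg import solve_discrete_are

        dt = 1e-3
        A = np.array([[1.0, dt], [-0.04, 0.999]])   # toy position/velocity dynamics
        B = np.array([[0.0], [dt]])
        C = np.array([[1.0, 0.0]])                  # only position is measured
        Q = np.diag([1e4, 1.0]); R = np.array([[1e-2]])      # LQR weights (tuning)
        W = np.diag([1e-10, 1e-8]); V = np.array([[1e-10]])  # assumed noise covariances

        P = solve_discrete_are(A, B, Q, R)          # LQR Riccati solution
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

        S = solve_discrete_are(A.T, C.T, W, V)      # dual Riccati for the observer
        L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)  # steady-state Kalman gain

        x = np.array([[1e-6], [0.0]])               # true state: 1 um initial offset
        x_hat = np.zeros((2, 1))                    # observer's estimate
        for _ in range(5000):
            u = -K @ x_hat                          # feedback on the estimate only
            y = C @ x                               # measurement (noise omitted)
            x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)  # predict + correct
            x = A @ x + B @ u
        print("residual position (m):", x[0, 0])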

  16. State observers and Kalman filtering for high performance vibration isolation systems

    SciTech Connect

    Beker, M. G.; Bertolini, A.; Hennes, E.; Rabeling, D. S.; Brand, J. F. J. van den; Bulten, H. J.

    2014-03-15

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.

  17. A survey on resource allocation in high performance distributed computing systems

    SciTech Connect

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. In this study, we report a comprehensive survey describing resource allocation in various HPC systems. The aim of the work is to aggregate, under a joint framework, the existing solutions for HPC and to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all classes of HPC system. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed in the implementation of existing resource allocation strategies that are widely presented in the literature.

  18. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles

    NASA Astrophysics Data System (ADS)

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles was proposed. We synthesized a kind of hexagonal monodisperse β-NaYF4:Yb3+,Er3+,Tm3+ upconversion nanoparticle and manipulated the intensity ratio of red emission (at 653 nm) to green emission (at 523 and 541 nm) to around 2 : 1, in order to match well with the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect between the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows distinct switching behavior with high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, an important anion in environmental and physiological systems, which can also reduce Prussian blue to Prussian white nanoparticles, leading to a decrease of the absorption spectrum, was chosen as the target. We were able to determine the concentration of sulfite in aqueous solution with a low detection limit and a broad linear relationship.

  19. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  20. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.

  1. Multisensory systems integration for high-performance motor control in flies.

    PubMed

    Frye, Mark A

    2010-06-01

    Engineered tracking systems 'fuse' data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control.

  2. Multisensory systems integration for high-performance motor control in flies.

    PubMed

    Frye, Mark A

    2010-06-01

    Engineered tracking systems 'fuse' data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control. PMID:20202821

  3. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    SciTech Connect

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
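
    The log-structured subarray write and later reassembly described above can be illustrated in a few lines (a toy sketch with numpy, not the Scientific Data Services code):

        # Subarray writes are appended to a log in completion order during the
        # write phase, then replayed into the chosen physical layout later.
        import numpy as np

        log = []  # append-only write log: (row_offset, col_offset, subarray)

        def log_write(r, c, subarray):
            log.append((r, c, np.asarray(subarray)))

        # Writers deposit subarrays in whatever order they complete.
        log_write(0, 0, [[1, 2], [3, 4]])
        log_write(2, 2, [[9, 9], [9, 9]])
        log_write(0, 2, [[5, 6], [7, 8]])

        def reassemble(shape):
            arr = np.zeros(shape, dtype=int)
            for r, c, sub in log:
                arr[r:r + sub.shape[0], c:c + sub.shape[1]] = sub
            return arr

        print(reassemble((4, 4)))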

  4. RAPID COMMUNICATION: Novel high performance small-scale thermoelectric power generation employing regenerative combustion systems

    NASA Astrophysics Data System (ADS)

    Weinberg, F. J.; Rowe, D. M.; Min, G.

    2002-07-01

    Hydrocarbon fuels have specific energy contents some two orders of magnitude greater than any electrical storage device. They therefore proffer an ideal source in the universal quest for compact, lightweight, long-lasting alternatives to batteries for powering ever-proliferating electronic devices. The motivation lies in the need to power, for example, equipment for infantry troops, or weather stations and buoys in polar regions which must signal their readings intermittently to passing satellites, unattended over long periods. Fuel cells, converters based on miniaturized gas turbines, and other systems under intensive study give rise to diverse practical difficulties. Thermoelectric devices are robust, durable and have no moving parts, but tend to be exceedingly inefficient. We propose regenerative combustion systems which mitigate this impediment and are likely to make high performance small-scale thermoelectric power generation applicable in practice. The efficiency of a thermoelectric generating system using preheat, when operated between ambient and 1200 K, is calculated to exceed the efficiency of the best present-day thermoelectric conversion system by more than 20%.
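
    For orientation on the quoted numbers, the standard thermoelectric generator efficiency formula can be evaluated between ambient and 1200 K; the figure of merit ZT = 1 below is an assumed illustrative value, not taken from the paper.

        # Standard TE generator efficiency: Carnot factor times a ZT-dependent
        # reduction. ZT is taken as 1 purely for illustration.
        def te_efficiency(t_cold, t_hot, zt):
            carnot = 1.0 - t_cold / t_hot
            m = (1.0 + zt) ** 0.5
            return carnot * (m - 1.0) / (m + t_cold / t_hot)

        # Operating between ambient (~300 K) and 1200 K, as in the abstract:
        print(f"{te_efficiency(300.0, 1200.0, 1.0):.1%}")  # ~19% for ZT = 1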

  5. Towards a smart Holter system with high performance analogue front-end and enhanced digital processing.

    PubMed

    Du, Leilei; Yan, Yan; Wu, Wenxian; Mei, Qiujun; Luo, Yu; Li, Yang; Wang, Lei

    2013-01-01

    Multiple-lead dynamic ECG recorders (Holters) play an important role in the early detection of various cardiovascular diseases. In this paper, we present the first several steps towards a 12-lead Holter system with a high-performance AFE (Analogue Front-End) and enhanced digital processing. The system incorporates an analogue front-end chip (ADS1298 from TI), which has not yet been widely used in most commercial Holter products. A highly efficient data management module was designed to handle the data exchange between the ADS1298 and the microprocessor (STM32L151 from STMicroelectronics). Furthermore, the system employs a Field Programmable Gate Array (Spartan-3E from Xilinx) module, on which a dedicated real-time 227-tap FIR filter is executed to improve the overall filtering performance, since the ADS1298 has no high-pass filtering capability and only allows limited low-pass filtering. The Spartan-3E FPGA is also capable of offering further on-board computational ability for a smarter Holter. The results indicate that all functional blocks work as intended. In the future, we will conduct clinical trials and compare our system with other state-of-the-art systems.
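
    A 227-tap high-pass FIR filter of the kind offloaded to the FPGA can be sketched with scipy; the cutoff and sampling rate below are assumed illustrative values, not the device's actual settings.

        # Design a 227-tap high-pass FIR (odd tap count is required for a
        # high-pass design) and apply it to a synthetic drifting ECG-like trace.
        import numpy as np
        from scipy.signal import firwin, lfilter

        fs = 500.0                                       # assumed sampling rate, Hz
        taps = firwin(227, 0.5, pass_zero=False, fs=fs)  # 0.5 Hz high-pass

        t = np.arange(0, 10, 1 / fs)
        signal = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
        filtered = lfilter(taps, 1.0, signal)  # 0.1 Hz baseline wander attenuated
        print(filtered[:5])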

  6. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, and autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems under unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named the Next Generation Network and System Simulator (NGNS2). NGNS2 allows the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault tolerance and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and on a 48-core SMP workstation.
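
    The distribution of simulated entities across computing elements can be sketched with mpi4py (a conceptual toy, not the NGNS2 code; the per-entity update is a placeholder):

        # Scatter simulated entities across MPI ranks, advance them locally,
        # and gather the state back for checkpoint/analysis.
        # Run with e.g.: mpiexec -n 4 python sim_sketch.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            entities = list(range(20))                         # stand-in entities
            chunks = [entities[i::size] for i in range(size)]  # static balancing
        else:
            chunks = None

        local = comm.scatter(chunks, root=0)   # each rank gets its share

        def step(entity):
            return entity + 1                  # placeholder per-entity update

        local = [step(e) for e in local]       # advance one time step
        state = comm.gather(local, root=0)
        if rank == 0:
            print(sum(state, []))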

  7. An empirical examination of the mechanisms mediating between high-performance work systems and the performance of Japanese organizations.

    PubMed

    Takeuchi, Riki; Lepak, David P; Wang, Heli; Takeuchi, Kazuo

    2007-07-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human capital and encourage a high degree of social exchange within an organization, and that these are positively related to the organization's overall performance. On the basis of a sample of Japanese establishments, the results provide support for the existence of these mediating mechanisms through which high-performance work systems affect overall establishment performance.

  8. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  9. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.

  10. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650
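
    The partition-then-amend pattern described above can be illustrated with a toy grid partitioner: objects are binned into tiles, tiles are processed independently (as MapReduce tasks would be), and boundary-crossing objects, which are deliberately duplicated across tiles, are deduplicated at the end. This is illustrative Python, not the Hadoop-GIS/RESQUE engine.

        # Grid partitioning with boundary-object handling.
        from collections import defaultdict

        TILE = 10.0  # tile edge length of the global spatial partition

        def tiles_for(box):
            # Every tile a bounding box (xmin, ymin, xmax, ymax) overlaps;
            # boundary objects land in each tile they touch.
            xmin, ymin, xmax, ymax = box
            for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
                for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
                    yield (tx, ty)

        objects = {"a": (1, 1, 3, 3), "b": (8, 8, 12, 12), "c": (11, 1, 13, 2)}
        partitions = defaultdict(list)
        for oid, box in objects.items():
            for tile in tiles_for(box):
                partitions[tile].append(oid)   # map phase: partition the data

        # "Query": per tile, find objects spanning more than one tile.
        hits = set()
        for tile, members in partitions.items():  # tiles processed independently
            for oid in members:
                if len(list(tiles_for(objects[oid]))) > 1:
                    hits.add(oid)              # the set dedupes boundary duplicates
        print(sorted(hits))                    # ['b']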

  11. Microdialysis based monitoring of subcutaneous interstitial and venous blood glucose in Type 1 diabetic subjects by mid-infrared spectrometry for intensive insulin therapy

    NASA Astrophysics Data System (ADS)

    Heise, H. Michael; Kondepati, Venkata Radhakrishna; Damm, Uwe; Licht, Michael; Feichtner, Franz; Mader, Julia Katharina; Ellmerer, Martin

    2008-02-01

    Implementing strict glycemic control can reduce the risk of serious complications in both diabetic and critically ill patients. For this purpose, many different blood glucose monitoring techniques and insulin infusion strategies have been tested towards the realization of an artificial pancreas under closed-loop control. In contrast to competing subcutaneously implanted electrochemical biosensors, microdialysis based systems for sampling body fluids from either the interstitial adipose tissue compartment or from venous blood have been developed, which allow ex-vivo glucose monitoring by mid-infrared spectrometry. For the first option, a commercially available, subcutaneously inserted CMA 60 microdialysis catheter has been used routinely. The vascular body interface includes a double-lumen venous catheter in combination with whole blood dilution using a heparin solution. The diluted whole blood is transported to a flow-through dialysis cell, where the harvesting of analytes across the microdialysis membrane takes place at high recovery rates. The dialysate is continuously transported to the IR-sensor. Ex-vivo measurements lasting up to 28 hours were conducted on type-1 diabetic subjects. Experiments have shown excellent agreement between the sensor readout and the reference blood glucose concentration values. The simultaneous assessment of dialysis recovery rates enables reliable quantification of whole blood concentrations of glucose and metabolites (urea, lactate, etc.) after taking blood dilution into account. Our results from transmission spectrometry indicate that the developed bedside device enables reliable long-term glucose monitoring with reagent- and calibration-free operation.
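
    The recovery and dilution correction mentioned above amounts to simple arithmetic; the sketch below uses invented numbers to show the shape of the calculation, not measured values or the device's actual calibration procedure.

        # Convert a dialysate glucose reading back to a whole-blood estimate:
        # undo the membrane recovery, then undo the heparin dilution.
        c_dialysate = 45.0   # mg/dL glucose measured in the dialysate (invented)
        recovery = 0.90      # fraction of analyte recovered across the membrane
        dilution = 2.0       # whole blood : heparin-solution dilution factor

        c_blood = (c_dialysate / recovery) * dilution   # = 100 mg/dL whole blood
        print(f"estimated whole-blood glucose: {c_blood:.0f} mg/dL")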

  12. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
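
    The hash-tree trick that parallelizes an inherently serial checksum can be sketched as follows: hash fixed-size chunks concurrently, then hash the ordered list of chunk digests. The resulting value is self-consistent but deliberately not the digest md5sum or msum would print; the file name and chunk size are illustrative.

        # Parallel chunk hashing combined into a single root digest.
        import hashlib
        from concurrent.futures import ProcessPoolExecutor

        CHUNK = 1 << 20  # 1 MiB chunks

        def hash_chunk(args):
            path, offset = args
            with open(path, "rb") as f:    # each worker reads its own chunk
                f.seek(offset)
                return offset, hashlib.md5(f.read(CHUNK)).digest()

        def tree_checksum(path, size):
            offsets = [(path, off) for off in range(0, size, CHUNK)]
            with ProcessPoolExecutor() as pool:
                leaves = sorted(pool.map(hash_chunk, offsets))  # restore order
            return hashlib.md5(b"".join(d for _, d in leaves)).hexdigest()

        if __name__ == "__main__":
            import os
            path = "bigfile.bin"           # hypothetical input file
            print(tree_checksum(path, os.path.getsize(path)))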

  13. Guidelines for application of fluorescent lamps in high-performance avionic backlight systems

    NASA Astrophysics Data System (ADS)

    Syroid, Daniel D.

    1997-07-01

    Fluorescent lamps have proven to be well suited for use in high performance avionic backlight systems as demonstrated by numerous production applications for both commercial and military cockpit displays. Cockpit display applications include: Boeing 777, new 737s, F-15, F-16, F-18, F-22, C- 130, Navy P3, NASA Space Shuttle and many others. Fluorescent lamp based backlights provide high luminance, high lumen efficiency, precision chromaticity and long life for avionic active matrix liquid crystal display applications. Lamps have been produced in many sizes and shapes. Lamp diameters range from 2.6 mm to over 20 mm and lengths for the larger diameter lamps range to over one meter. Highly convoluted serpentine lamp configurations are common as are both hot and cold cathode electrode designs. This paper will review fluorescent lamp operating principles, discuss typical requirements for avionic grade lamps, compare avionic and laptop backlight designs and provide guidelines for the proper application of lamps and performance choices that must be made to attain optimum system performance considering high luminance output, system efficiency, dimming range and cost.

  14. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    NASA Astrophysics Data System (ADS)

    Deatrich, D. C.; Liu, S. X.; Tafirout, R.

    2010-04-01

    We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based. It controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing was implemented to group files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, thresholds or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. The implementation of priorities will guarantee file delivery to all clients in a timely manner.
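
    The elevator-style read-back reordering can be sketched in a few lines: pending requests are grouped by tape and served in position order, so each tape is mounted once and read in a single forward sweep (illustrative Python, not Tapeguy's Perl implementation):

        # Elevator (single-sweep) reordering of tape read requests.
        from collections import defaultdict

        # Pending read requests: (tape_id, position_on_tape, filename)
        requests = [("T01", 520, "f3"), ("T02", 10, "f4"),
                    ("T01", 40, "f1"), ("T01", 300, "f2")]

        by_tape = defaultdict(list)
        for tape, pos, name in requests:
            by_tape[tape].append((pos, name))

        for tape in sorted(by_tape):                 # mount each tape exactly once
            for pos, name in sorted(by_tape[tape]):  # one forward sweep per tape
                print(f"read {name} from {tape} at position {pos}")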

  15. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes this back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces; we describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs, presenting good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting and animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, even dynamically as a function of object properties, with instant visual feedback to the user.

  16. A Low-Cost, High-Performance System for Fluorescence Lateral Flow Assays

    PubMed Central

    Lee, Linda G.; Nordman, Eric S.; Johnson, Martin D.; Oldham, Mark F.

    2013-01-01

    We demonstrate a fluorescence lateral flow system that has excellent sensitivity and wide dynamic range. The illumination system utilizes an LED, plastic lenses, and plastic and colored glass filters for the excitation and emission light. Images are collected on an iPhone 4. Several fluorescent dyes with long Stokes shifts were evaluated for their signal and nonspecific binding in lateral flow. A wide range of values for the ratio of signal to nonspecific binding was found, from 50 for R-phycoerythrin (R-PE) to 0.15 for Brilliant Violet 605. The long Stokes shift of R-PE allowed the use of inexpensive plastic filters rather than costly interference filters to block the LED light. Fluorescence detection with R-PE and absorbance detection with colloidal gold were directly compared in lateral flow using biotinylated bovine serum albumin (BSA) as the analyte. Fluorescence provided linear data over a range of 0.4–4,000 ng/mL with a 1,000-fold signal change, while colloidal gold provided non-linear data over a range of 16–4,000 ng/mL with a 10-fold signal change. A comparison using human chorionic gonadotropin (hCG) as the analyte showed a similar advantage for the fluorescent system. We believe our inexpensive yet high-performance platform will be useful for providing quantitative and sensitive detection in a point-of-care setting. PMID:25586412

  17. High-performance surface acoustic wave immunosensing system on a PEG/aptamer hybridized surface.

    PubMed

    Horiguchi, Yukichi; Miyachi, Seigo; Nagasaki, Yukio

    2013-06-18

    Label-free immunoassay systems have the advantages of procedural simplicity and a low construction cost for immunosensing surfaces. When label-free immunoassay systems are considered, the nonspecific adsorption of unwanted materials must be eliminated, as it leads to detection errors. PEG is well known as a blocking agent that prevents the adsorption of nonspecifically binding materials when coimmobilized with ligands for targets such as antibodies and oligonucleotides. The construction strategy for PEG/ligand coimmobilized surfaces is an important point in the preparation of high-performance assays, because the physiological condition of the ligand depends strongly on its interaction with the PEG chain. In this report, we investigate the interaction between thrombin and a thrombin-binding aptamer (TBA) on a PEG/TBA coimmobilized surface by using a shear horizontal surface acoustic wave (SAW) sensor. The thrombin-TBA binding property shows remarkable differences with changes in the PEG density and in the distance from the gold surface to the aptamer.

  18. A High Performance Pocket-Size System for Evaluations in Acoustic Signal Processing

    NASA Astrophysics Data System (ADS)

    Rass, Uwe; Steeger, Gerhard H.

    2001-12-01

    Custom-made hardware is attractive for sophisticated signal processing in wearable electroacoustic devices, but has a high initial cost overhead. Thus, signal processing algorithms should be tested thoroughly in real application environments by potential end users prior to the hardware implementation. In addition, the algorithms should be easily alterable during this test phase. A wearable system which meets these requirements has been developed and built. The system is based on the high performance signal processor Motorola DSP56309. This device also includes high quality stereo analog-to-digital-(ADC)- and digital-to-analog-(DAC)-converters with 20 bit word length each. The available dynamic range exceeds 88 dB. The input and output gains can be adjusted by digitally controlled potentiometers. The housing of the unit is small enough to carry it in a pocket (dimensions 150 × 80 × 25 mm). Software tools have been developed to ease the development of new algorithms. A set of configurable Assembler code modules implements all hardware dependent software routines and gives easy access to the peripherals and interfaces. A comfortable fitting interface allows easy control of the signal processing unit from a PC, even by assistant personnel. The device has proven to be a helpful means for development and field evaluations of advanced new hearing aid algorithms, within interdisciplinary research projects. Now it is offered to the scientific community.

  19. Detection of HEMA in self-etching adhesive systems with high performance liquid chromatography

    NASA Astrophysics Data System (ADS)

    Panduric, V.; Tarle, Z.; Hameršak, Z.; Stipetić, I.; Matosevic, D.; Negovetić-Mandić, V.; Prskalo, K.

    2009-04-01

    One of the factors that can decrease the hydrolytic stability of self-etching adhesive systems (SEAS) is 2-hydroxyethyl methacrylate (HEMA). Due to the hydrolytic instability of acidic methacrylate monomers in SEAS, HEMA can be present even if the manufacturer did not include it in the original composition. The aim of the study was to determine the presence of HEMA arising from decomposition by hydrolysis of methacrylates during storage, which results in a loss of adhesion strength to the hard dental tissues of the tooth crown. The three most commonly used SEAS were tested, AdheSE ONE, G-Bond and iBond, under different storage conditions. High performance liquid chromatography analysis was performed on a Nucleosil C 18-100 5 μm (250 × 4.6 mm) column, Knauer K-501 pumps and a Wellchrom DAD K-2700 detector at 215 nm. Data were collected and processed by EuroCrom 2000 HPLC software. Calibration curves were made relating eluted peak area to known concentrations of HEMA (purchased from Fluka). The elution time for HEMA is 12.25 min at a flow rate of 1.0 ml/min. The obtained results indicate that no HEMA was present in AdheSE ONE, because its methacrylates are substituted with methacrylamides, which seem to be more stable under acidic aqueous conditions. In all other adhesive systems HEMA was detected.
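
    The calibration step amounts to a linear fit of peak area against known concentration, inverted to quantify an unknown; the numbers below are invented for illustration, not the study's data.

        # Linear calibration curve from HEMA standards, then inversion.
        import numpy as np

        conc = np.array([5.0, 10.0, 20.0, 40.0])           # standards, ug/mL
        area = np.array([1020.0, 2050.0, 4100.0, 8150.0])  # peak areas at 215 nm

        slope, intercept = np.polyfit(conc, area, 1)       # fit the curve
        unknown_area = 3000.0
        unknown_conc = (unknown_area - intercept) / slope  # invert the fit
        print(f"estimated HEMA concentration: {unknown_conc:.1f} ug/mL")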

  20. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  1. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-04-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; and cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  2. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1998-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; and cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustor; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  3. Design of a VLSI scan conversion processor for high-performance 3-D graphics systems

    SciTech Connect

    Huang, H.U.

    1988-01-01

    Scan-conversion processing is the bottleneck in the image generation process. To solve the problem of smooth shading and hidden surface elimination, a new processor architecture was invented which has been labeled as a scan-conversion processor architecture (SCP). The SCP is designed to perform hidden surface elimination and scan conversion for 64 pixels. The color intensities are dual-buffered so that when one buffer is being updated the other can be scanned out. Z-depth is used to perform the hidden surface elimination. The key operation performed by the SCP is the evaluation of linear functions of a form like F(X,Y) = A X + B Y + C. The computation is further simplified by using incremental addition. The z-depth buffer and the color buffers are incorporated onto the same chip. The SCP receives from its preprocessor the information for the definition of polygons and the computation of z-depth and RGB color intensities. Many copies of this processor will be used in a high-performance graphics system.
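
    The incremental-addition trick is easy to see in code: evaluating F(X,Y) = A X + B Y + C over a pixel grid needs no per-pixel multiplications, since stepping one pixel in X adds A and stepping one scanline in Y adds B (illustrative sketch, not the SCP hardware):

        # Incremental evaluation of a plane equation over a pixel grid.
        A, B, C = 2.0, -1.0, 5.0   # plane coefficients (e.g. interpolated depth)
        WIDTH, HEIGHT = 4, 3

        row_start = C               # F(0, 0)
        for y in range(HEIGHT):
            f = row_start           # F(0, y)
            for x in range(WIDTH):
                print(f"F({x},{y}) = {f}")  # use f as z-depth or color intensity
                f += A              # advance one pixel in X
            row_start += B          # advance one scanline in Y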

  4. High-performance CMOS image sensors at BAE SYSTEMS Imaging Solutions

    NASA Astrophysics Data System (ADS)

    Vu, Paul; Fowler, Boyd; Liu, Chiao; Mims, Steve; Balicki, Janusz; Bartkovjak, Peter; Do, Hung; Li, Wang

    2012-07-01

    In this paper, we present an overview of high-performance CMOS image sensor products developed at BAE SYSTEMS Imaging Solutions, designed to satisfy the increasingly challenging technical requirements of image sensors used in advanced scientific, industrial, and low light imaging applications. We discuss the design and present the test results of a family of image sensors tailored for high imaging performance and capable of delivering sub-electron readout noise, high dynamic range, low power, high frame rates, and high sensitivity. We briefly review the performance of the CIS2051, a 5.5-Mpixel image sensor which represents our first commercial CMOS image sensor product and demonstrates the potential of our technology. We then present the performance characteristics of the CIS1021, a full HD format CMOS image sensor capable of delivering sub-electron read noise at a 50 fps frame rate at full HD resolution. We also review the performance of the CIS1042, a 4-Mpixel image sensor which offers better than 70% QE @ 600 nm combined with better than 91 dB intra-scene dynamic range and about 1 e- read noise at a 100 fps frame rate at full resolution.

  5. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-01-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  6. Analysis of starch in food systems by high-performance size exclusion chromatography.

    PubMed

    Ovando-Martínez, Maribel; Whitney, Kristin; Simsek, Senay

    2013-02-01

    Starch has unique physicochemical characteristics among food carbohydrates. Starch contributes to the physicochemical attributes of food products made from roots, legumes, cereals, and fruits. It occurs naturally as distinct particles, called granules. Most starch granules are a mixture of 2 sugar polymers: a highly branched polysaccharide named amylopectin and a basically linear polysaccharide named amylose. The starch contained in food products undergoes changes during processing, which causes changes in the starch molecular weight and the amylose to amylopectin ratio. The objective of this study was to develop a new, simple, 1-step, and accurate method for simultaneous determination of the amylose to amylopectin ratio as well as the weight-averaged molecular weight of starch in food products. Starch from bread flour, canned peas, corn flake cereal, snack crackers, canned kidney beans, pasta, potato chips, and white bread was extracted by dissolving in KOH and urea and precipitation with ethanol. Starch samples were solubilized and analyzed on a high-performance size exclusion chromatography (HPSEC) system. To verify the identity of the peaks, fractions were collected and soluble starch and beta-glucan assays were performed in addition to gas chromatography analysis. We found that all the fractions contain only glucose and that the soluble starch assay is correlated with the HPSEC fractionation. This new method can be used to determine the amylose to amylopectin ratio and the weight-averaged molecular weight of starch from various food products using as little as 25 mg of dry sample. PMID:23330715

  7. HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    SciTech Connect

    Kim, Youngjae; Gupta, Aayush; Urgaonkar, Bhuvan; Piotr, Berman; Sivasubramaniam, Anand

    2011-01-01

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based solid-state drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than magnetic hard disk drives (HDDs), and can sometimes be as slow as an HDD (due to excessive garbage collection (GC) induced by a high intensity of random writes). Given these trade-offs between HDDs and SSDs in terms of cost, performance, and lifetime, the current consensus among several storage experts is to view SSDs not as a replacement for the HDD but rather as a complementary device within the high-performance storage hierarchy. We design and evaluate such a hybrid system, called HybridStore, to provide: (a) HybridPlan: an improved capacity planning technique for administrators with the overall goal of operating within cost budgets and (b) HybridDyn: improved performance/lifetime guarantees during episodes of deviations from expected workloads through two novel mechanisms: write regulation and fragmentation busting. As an illustrative example of HybridStore's efficacy, HybridPlan is able to find the most cost-effective storage configuration for a large-scale workload of Microsoft Research and suggest one MLC SSD with ten 7.2K RPM HDDs instead of fourteen 7.2K RPM HDDs alone. HybridDyn is able to reduce the average response time for an enterprise-scale, random-write-dominant workload by about 71% as compared to an HDD-based system.
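
    The HybridPlan result quoted above (one MLC SSD plus ten HDDs in place of fourteen HDDs) is, at its core, a cost minimization under capacity and performance constraints. A toy sketch of that idea follows; the device prices, capacities, and IOPS figures are invented placeholders, and the real HybridPlan model additionally accounts for SSD lifetime and garbage-collection behavior.

        from itertools import product

        # Invented device parameters: (cost $, capacity GB, random-write IOPS).
        SSD = (700.0, 120.0, 5000.0)   # hypothetical MLC SSD
        HDD = (150.0, 1000.0, 150.0)   # hypothetical 7.2K RPM HDD

        need_capacity, need_iops = 8000.0, 6000.0  # workload requirements (made up)

        best = None
        for n_ssd, n_hdd in product(range(0, 5), range(0, 30)):
            cost = n_ssd * SSD[0] + n_hdd * HDD[0]
            cap = n_ssd * SSD[1] + n_hdd * HDD[1]
            iops = n_ssd * SSD[2] + n_hdd * HDD[2]
            if cap >= need_capacity and iops >= need_iops:
                if best is None or cost < best[0]:
                    best = (cost, n_ssd, n_hdd)

        print("cheapest feasible config: $%.0f with %d SSD(s) + %d HDD(s)" % best)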

  8. Advanced Insulation for High Performance Cost-Effective Wall, Roof, and Foundation Systems Final Report

    SciTech Connect

    Costeux, Stephane; Bunker, Shanon

    2013-12-20

    The objective of this project was to explore and potentially develop high-performing insulation with increased R/inch and low impact on climate change that would help design highly insulating building envelope systems with more durable performance and lower overall system cost than envelopes with equivalent performance made with materials available today. The proposed technical approach relied on insulation foams with nanoscale pores (about 100 nm in size) in which heat transfer is decreased. Through the development of new foaming methods, new polymer formulations, and new analytical techniques, and by advancing the understanding of how cells nucleate, expand, and stabilize at the nanoscale, Dow successfully invented and developed methods to produce foams with 100 nm cells and 80% porosity by batch foaming at the laboratory scale. Measurements of the gas conductivity on small nanofoam specimens confirmed quantitatively the benefit of nanoscale cells (Knudsen effect) for increasing insulation value, which was the key technical hypothesis of the program. In order to bring this technology closer to a viable semi-continuous/continuous process, the project team modified an existing continuous extrusion foaming process as well as designed and built a custom system to produce 6" x 6" foam panels. Dow demonstrated for the first time that nanofoams can be produced in both processes. However, due to technical delays, the foam characteristics achieved so far fall short of the 100 nm target set for optimal insulation foams. In parallel with the technology development, effort was directed to the determination of the most promising applications for nanocellular insulation foam. A Voice of Customer (VOC) exercise confirmed that demand for high-R-value products will rise due to increased building code requirements in the near future, but that acceptance of novel products by the building industry may be slow. Partnerships with green builders, initial launches in smaller markets (e.g. EIFS
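
    The Knudsen effect the report verified can be estimated from a standard relation: the conductivity of a gas confined in pores of size d falls as k_gas = k_0 / (1 + 2*beta*Kn), with Knudsen number Kn = lambda/d. The sketch below uses textbook values for air (mean free path around 70 nm at ambient conditions, beta on the order of 1.5 to 2); these are assumptions for illustration, not numbers from the Dow report.

        # Knudsen suppression of gas conduction in small pores (air, ambient).
        K0 = 0.026        # W/(m*K), bulk thermal conductivity of still air
        MFP = 70e-9       # m, approximate mean free path of air molecules
        BETA = 1.8        # dimensionless, order 1.5-2 for air (assumed)

        def k_gas(pore_d_m: float) -> float:
            """Effective gas-phase conductivity in a pore of diameter pore_d_m."""
            kn = MFP / pore_d_m
            return K0 / (1.0 + 2.0 * BETA * kn)

        for d in (1e-3, 10e-6, 100e-9):  # conventional foam cell, fine cell, nanofoam
            print(f"pore {d*1e9:>12,.0f} nm : k_gas ~ {k_gas(d)*1000:.2f} mW/(m*K)")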

  9. Engineering development of coal-fired high-performance power systems

    SciTech Connect

    1999-05-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. The char combustion tests in the arch-fired arrangement were completed this quarter. A total of twenty-one setpoints were successfully completed, firing both synthetically-made char

  10. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    Unknown

    1999-02-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. A general arrangement drawing of the char transfer system was forwarded to SCS for their review. Structural steel drawings were used to generate a three-dimensional model of the char

  11. Design of a high-performance telepresence system incorporating an active vision system for enhanced visual perception of remote environments

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Asbery, Richard

    1995-12-01

    This paper describes the design, development, and implementation of a telepresence system for hazardous environment applications. Its primary feature is a high-performance active stereo vision system slaved to the motion of the operator's head. To simulate the presence of an operator in a remote, hazardous environment, it is necessary to provide sufficient visual information about the remote environment, and the operator must be able to interact with the environment to carry out manipulative tasks. To achieve an enhanced sense of visual perception, we have developed a tightly integrated pan-and-tilt stereo vision system with a head-mounted display. The motion of the operator's head is monitored by a six-DOF sensor, which provides the demand signals to servocontrol the active vision system. The result is a compact yet high-performance design, employing mechatronic principles to deliver a unit that can be mounted on a small mobile platform. We have also developed an open-architecture controller to implement the dynamic, active vision system, which exhibits the dynamic performance characteristics of the human head-eye system so as to form a natural and intuitive interface. A series of tests has been conducted to establish the system latency and to explore the effectiveness of remote 3D human perception, particularly with regard to manipulation tasks and navigation. The results of these tests are presented.
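
    The abstract does not give the control law, so the following is only a generic sketch of how a head-slaved pan/tilt loop is commonly closed: read the six-DOF head sensor, take yaw and pitch as demands, and drive the gimbal with a proportional-derivative velocity command. All gains, rates, and device interfaces are hypothetical.

        import time

        KP, KD = 8.0, 0.4          # assumed PD gains, 1/s and s
        DT = 0.01                  # 100 Hz servo loop (assumed)

        def read_head_pose():
            """Placeholder for the six-DOF head tracker; returns (yaw, pitch) in rad."""
            return 0.2, -0.1

        def read_gimbal():
            """Placeholder encoder read-back; returns (yaw, pitch) in rad."""
            return 0.0, 0.0

        def command_gimbal(rate_yaw, rate_pitch):
            """Placeholder velocity command to the pan/tilt drives (rad/s)."""
            pass

        err_prev = (0.0, 0.0)
        for _ in range(1000):      # run the loop for 10 s in this sketch
            demand = read_head_pose()
            actual = read_gimbal()
            err = (demand[0] - actual[0], demand[1] - actual[1])
            # PD velocity command: track the operator's head with damped response.
            command_gimbal(KP * err[0] + KD * (err[0] - err_prev[0]) / DT,
                           KP * err[1] + KD * (err[1] - err_prev[1]) / DT)
            err_prev = err
            time.sleep(DT)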

  12. Coal-fired high performance power generating system. Quarterly progress report

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates < 25% NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept, it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals, a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  13. High performance dash on warning air mobile, missile system. [intercontinental ballistic missiles - systems analysis]

    NASA Technical Reports Server (NTRS)

    Levin, A. D.; Castellano, C. R.; Hague, D. S.

    1975-01-01

    An aircraft-missile system which performs a high-acceleration takeoff followed by a supersonic dash to a 'safe' distance from the launch site is presented. Topics considered are: (1) the technological feasibility of the dash-on-warning concept; (2) aircraft and boost trajectory requirements; and (3) partial cost estimates for a fleet of aircraft providing 200 missiles on airborne alert. Various aircraft boost propulsion systems were studied, such as an unstaged cryogenic rocket, an unstaged storable liquid, and a staged solid rocket system. Various wing planforms were also studied. Vehicle gross weights are given. The results indicate that the dash-on-warning concept will meet expected performance criteria and can be implemented using existing technology, such as all-aluminum aircraft and existing high-bypass-ratio turbofan engines.

  14. System and method for on demand, vanishing, high performance electronic systems

    DOEpatents

    Shah, Kedar G.; Pannu, Satinderpall S.

    2016-03-22

    An integrated circuit system having an integrated circuit (IC) component which is able to have its functionality destroyed upon receiving a command signal. The system may involve a substrate with the IC component being supported on the substrate. A module may be disposed in proximity to the IC component. The module may have a cavity and a dissolving compound in a solid form disposed in the cavity. A heater component may be configured to heat the dissolving compound to a point of sublimation where the dissolving compound changes from a solid to a gaseous dissolving compound. A triggering mechanism may be used for initiating a dissolution process whereby the gaseous dissolving compound is allowed to attack the IC component and destroy a functionality of the IC component.

  15. Instructional Leadership in Centralised Systems: Evidence from Greek High-Performing Secondary Schools

    ERIC Educational Resources Information Center

    Kaparou, Maria; Bush, Tony

    2015-01-01

    This paper examines the enactment of instructional leadership (IL) in high-performing secondary schools (HPSS), and the relationship between leadership and learning in raising student outcomes and encouraging teachers' professional learning in the highly centralised context of Greece. It reports part of a comparative research study focused on…

  16. A simple method for evaluating image quality of screen-film system using a high-performance digital camera

    NASA Astrophysics Data System (ADS)

    Fujita, Naotoshi; Yamazaki, Asumi; Ichikawa, Katsuhiro; Kodera, Yoshie

    2009-02-01

    Screen-film systems are still used in mammography. Therefore, it is important to measure their physical properties, such as the modulation transfer function (MTF) and noise power spectrum (NPS). The MTF and NPS of screen-film systems are mostly measured with a microdensitometer. However, since microdensitometers are not commonly available in general hospitals, it is difficult to carry out these measurements regularly. In the past, Ichikawa et al. measured and evaluated the physical properties of medical liquid crystal displays using a high-performance digital camera. With this approach, the physical properties of screen-film systems can be measured easily without a microdensitometer. We therefore propose a simple method for measuring the MTF and NPS of screen-film systems using a high-performance digital camera. The proposed method is based on the edge method (for evaluating MTF) and the one-dimensional fast Fourier transform (FFT) method (for evaluating NPS). The MTF and NPS evaluated using the high-performance digital camera corresponded closely with those evaluated using a microdensitometer, so the digital camera can substitute for the microdensitometer in calculating the MTF and NPS. Further, this method simplifies the evaluation of the physical properties of screen-film systems.
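
    The edge method the authors use reduces to a short numerical recipe: differentiate the edge-spread function (ESF) to obtain the line-spread function (LSF), then take the normalized magnitude of its Fourier transform. The sketch below runs this on a synthetic one-dimensional edge; the pixel pitch and edge blur are hypothetical, and refinements such as slanted-edge resampling and the NPS branch are omitted.

        import numpy as np

        # Synthetic edge-spread function (ESF): a blurred step, 10 um sampling.
        dx = 0.010                                  # pixel pitch, mm (assumed)
        x = np.arange(-2.0, 2.0, dx)
        esf = 0.5 * (1 + np.tanh(x / 0.05))         # hypothetical blurred edge

        # Edge method: differentiate ESF -> line-spread function (LSF),
        # then |FFT| of the LSF, normalized to 1 at zero frequency, is the MTF.
        lsf = np.gradient(esf, dx)
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(len(lsf), d=dx)     # cycles/mm

        for f_target in (1.0, 2.0, 5.0):
            i = np.argmin(np.abs(freqs - f_target))
            print(f"MTF at {freqs[i]:.1f} cycles/mm: {mtf[i]:.3f}")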

  17. Silicon photonics-based laser system for high performance fiber sensing

    NASA Astrophysics Data System (ADS)

    Ayotte, S.; Faucher, D.; Babin, A.; Costin, F.; Latrasse, C.; Poulin, M.; G.-Deschênes, É.; Pelletier, F.; Laliberté, M.

    2015-09-01

    We present a compact four-laser source based on the low-noise, high-bandwidth Pound-Drever-Hall method and optical phase-locked loops for sensing narrow spectral features. Four semiconductor external-cavity lasers in butterfly packages are mounted on a shared electronics control board, and all other optical functions are integrated on a single silicon photonics chip. This high-performance source is compact, automated, and robust, operates over a wide temperature range, and remains locked for days. A laser-to-resonance frequency noise of 0.25 Hz/√Hz is demonstrated.
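
    For reference, the shape of the Pound-Drever-Hall error signal behind such a lock can be reproduced numerically from the standard cavity-reflection model (see, e.g., Black's tutorial treatment). The mirror reflectivity, free spectral range, and modulation frequency below are arbitrary illustrative values, not parameters of this silicon photonics system.

        import numpy as np

        R = 0.98        # mirror amplitude reflectivity (assumed)
        FSR = 1.0e9     # cavity free spectral range, Hz (assumed)
        OMEGA = 20e6    # phase-modulation frequency, Hz (assumed)

        def refl(detuning_hz):
            """Cavity reflection coefficient F as a function of laser detuning."""
            phi = 2 * np.pi * detuning_hz / FSR
            return R * (np.exp(1j * phi) - 1) / (1 - R**2 * np.exp(1j * phi))

        # Standard PDH error signal (up to an overall gain): the imaginary part of
        # the beat between the carrier reflection and the two modulation sidebands.
        d = np.linspace(-5e6, 5e6, 2001)
        err = np.imag(refl(d) * np.conj(refl(d + OMEGA))
                      - np.conj(refl(d)) * refl(d - OMEGA))

        # The steep linear slope through resonance is what the loop locks to.
        slope = (err[1001] - err[999]) / (d[1001] - d[999])
        print(f"error-signal slope near resonance: {slope:.3e} per Hz")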

  18. Development Of High Performance Head Positioner For An Optical Disk Storage System

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tetsu; Yumura, Takashi; Shimegi, Hiroo

    1987-01-01

    Design of a high performance linear head positioner suited to an optical disk drive is reported. First, a flat, compact positioner structure with a linear motor consisting of one coil and two magnetic circuits was devised. Next, a new design method for increasing drive force, reducing motor size, and raising the resonant frequency is discussed, combining motor design with vibration analysis. Finally, a flat, compact head positioner delivering 4.8 N at 1.6 A with an approximately 6 kHz resonant frequency was developed using this design method.

  19. Constructing a LabVIEW-Controlled High-Performance Liquid Chromatography (HPLC) System: An Undergraduate Instrumental Methods Exercise

    ERIC Educational Resources Information Center

    Smith, Eugene T.; Hill, Marc

    2011-01-01

    In this laboratory exercise, students develop a LabVIEW-controlled high-performance liquid chromatography system utilizing a data acquisition device, two pumps, a detector, and fraction collector. The programming experience involves a variety of methods for interface communication, including serial control, analog-to-digital conversion, and…
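
    The exercise itself is built in LabVIEW; purely as an illustration of the serial-control idea it teaches, the sketch below shows the same pattern in Python with the pyserial package. The port name, baud rate, and command strings are hypothetical, since pump command sets vary by vendor.

        import serial  # pyserial

        # Open the pump's serial port (port name and settings are assumptions).
        pump = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)

        def send(cmd: str) -> str:
            """Send one ASCII command line and return the pump's reply."""
            pump.write((cmd + "\r\n").encode("ascii"))
            return pump.readline().decode("ascii").strip()

        # Hypothetical command set: set flow rate (mL/min), then start the pump.
        print(send("FLOW 1.00"))
        print(send("START"))
        pump.close()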

  20. Relationships of Cognitive and Metacognitive Learning Strategies to Mathematics Achievement in Four High-Performing East Asian Education Systems

    ERIC Educational Resources Information Center

    Areepattamannil, Shaljan; Caleon, Imelda S.

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education…

  1. A High Resolution On-Chip Delay Sensor with Low Supply-Voltage Sensitivity for High-Performance Electronic Systems

    PubMed Central

    Sheng, Duo; Lai, Hsiu-Fan; Chan, Sheng-Min; Hong, Min-Rong

    2015-01-01

    An all-digital on-chip delay sensor (OCDS) circuit with high delay-measurement resolution and low supply-voltage sensitivity for efficient detection and diagnosis in high-performance electronic system applications is presented. Based on the proposed delay measurement scheme, the quantization resolution of the proposed OCDS can be reduced to several picoseconds. Additionally, the proposed cascade-stage delay measurement circuit can enhance immunity to supply-voltage variations of the delay measurement resolution without extra self-biasing or calibration circuits. Simulation results show that the delay measurement resolution can be improved to 1.2 ps; the average delay resolution variation is 0.55% with supply-voltage variations of ±10%. Moreover, the proposed delay sensor can be implemented in an all-digital manner, making it very suitable for high-performance electronic system applications as well as system-level integration. PMID:25688590
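
    The picosecond quantization quoted above is characteristic of vernier-style delay measurement, where the resolution is the difference of two nearly equal stage delays rather than a single gate delay. A behavioral sketch follows; the OCDS circuit itself is not described in enough detail here to reproduce, so the stage delays are illustrative only (chosen to give the quoted 1.2 ps step).

        import math

        TAU_SLOW = 50.0e-12   # delay per stage in the signal chain, s (assumed)
        TAU_FAST = 48.8e-12   # delay per stage in the reference chain, s (assumed)
        LSB = TAU_SLOW - TAU_FAST                # vernier resolution: 1.2 ps

        def measure(delay_s: float) -> int:
            """Number of vernier stages until the fast chain catches the slow one.

            Each stage closes the gap by (TAU_SLOW - TAU_FAST), so the digital
            output code is ceil(delay / LSB)."""
            return math.ceil(delay_s / LSB)

        for d_ps in (3.0, 10.0, 37.0):
            code = measure(d_ps * 1e-12)
            print(f"input delay {d_ps:5.1f} ps -> code {code:3d} "
                  f"(~{code * LSB * 1e12:.1f} ps)")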

  2. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  3. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    NASA Astrophysics Data System (ADS)

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-05-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on the investigation of lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. Underlining the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols leads to improved sensitivity for protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides the proof of its applicability for the very first time, (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for a semi-quantitative estimation of protein concentrations.

  4. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    PubMed Central

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-01-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on the investigation of lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. Underlining the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols leads to improved sensitivity for protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides the proof of its applicability for the very first time, (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for a semi-quantitative estimation of protein concentrations. PMID:27220270

  5. Determination of the kinetic rate constant of cyclodextrin supramolecular systems by high-performance affinity chromatography.

    PubMed

    Zhang, Jiwen; Li, Haiyan; Sun, Lixin; Wang, Caifen

    2015-01-01

    The kinetics of association and dissociation are fundamental to host-guest interactions (such as drug-target and drug-excipient interactions) and to the in vivo performance of supramolecules. With the advantages of speed, high precision, and ease of automation, high-performance affinity chromatography (HPAC) is one of the best techniques for measuring the interaction kinetics of weak to moderate affinities, such as the typical host-guest interactions of drugs and cyclodextrins, using a cyclodextrin-immobilized column. The measurement involves equilibration of the cyclodextrin column; upload and elution of the samples (non-retained substances and retained solutes) at different flow rates on the cyclodextrin and control columns; and data analysis. Cyclodextrin-immobilized chromatography has proven to be a cost-efficient, high-throughput tool for measuring (small-molecule) drug-cyclodextrin interactions as well as the dissociation of other supramolecules with relatively weak, fast, and extensive interactions. PMID:25749964

  6. Coal-fired high performance power generating system. Quarterly progress report, July 1, 1993--September 30, 1993

    SciTech Connect

    Not Available

    1993-12-31

    This report covers work carried out under Task 3, Preliminary Research and Development, and Task 4, Commercial Generating Plant Design, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates ≤ 25% NSPS; cost of electricity ≤ 90% of present plants; coal ≥ 65% of heat input; and all solid wastes benign. The report discusses progress in cycle analysis, chemical reactor modeling, ash deposition rate calculations for the HITAF (high temperature advanced furnace) convective air heater, air heater materials, and deposit initiation and growth on ceramic substrates.

  7. Coal-fired high performance power generating system. Draft quarterly progress report, January 1--March 31, 1995

    SciTech Connect

    1995-10-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates ≤ 25% NSPS; cost of electricity ≤ 90% of present plants; coal ≥ 65% of heat input; and all solid wastes benign. A crucial aspect of the authors' design is the integration of the gas turbine requirements with the HITAF output and steam cycle requirements. To take full advantage of modern, highly efficient aeroderivative gas turbines, they have carried out a large number of cycle calculations to optimize their commercial plant designs for both greenfield and repowering applications.

  8. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
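
    A toy model of the patent's idea (snoop units gating coherence traffic by partition) is sketched below. It is only a software analogy of the hardware scheme; the class names and the invalidate-on-write policy are illustrative, not taken from the patent.

        class SnoopUnit:
            """Forwards coherence traffic only within its processor's partition."""
            def __init__(self, cpu_id: int, partition: int):
                self.cpu_id, self.partition = cpu_id, partition
                self.cache = {}                      # address -> value (local cache)

            def write(self, fabric, addr, value):
                self.cache[addr] = value
                # Broadcast an invalidate, delivered only to same-partition peers.
                for peer in fabric:
                    if peer is not self and peer.partition == self.partition:
                        peer.cache.pop(addr, None)

        # Eight processors partitioned into two independent coherence domains.
        fabric = [SnoopUnit(i, partition=i // 4) for i in range(8)]
        for u in fabric:
            u.cache[0x100] = "stale"
        fabric[0].write(fabric, 0x100, "new")        # invalidates CPUs 1-3 only

        print([0x100 in u.cache for u in fabric])    # [True, False, False, False,
                                                     #  True, True, True, True]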

  9. The use of fault tolerant and testable high performance integrated circuits for improved military electronic system availability

    NASA Astrophysics Data System (ADS)

    Bart, J. J.

    1985-08-01

    The rapid evolution of high performance Very Large Scale Integrated Circuits (VLSICs) has created accelerated opportunities for improving the operational performance of military electronic systems. In addition, the microelectronics technology base holds the promise of improvements in the operational availability, survivability, and logistics supportability of these complex systems. The basis for these advances lies in the ability to design microelectronics-based systems that are much more fault tolerant and more easily testable than those developed to date. Current activities in the design of testable, fault-tolerant integrated circuits are reviewed and areas for future emphasis are suggested.

  10. Development of Nano-structured Electrode Materials for High Performance Energy Storage System

    NASA Astrophysics Data System (ADS)

    Huang, Zhendong

    Systematic studies have been done to develop a low cost, environmental-friendly facile fabrication process for the preparation of high performance nanostructured electrode materials and to fully understand the influence factors on the electrochemical performance in the application of lithium ion batteries (LIBs) or supercapacitors. For LIBs, LiNi1/3Co1/3Mn1/3O2 (NCM) with a 1D porous structure has been developed as cathode material. The tube-like 1D structure consists of inter-linked, multi-facet nanoparticles of approximately 100-500nm in diameter. The microscopically porous structure originates from the honeycomb-shaped precursor foaming gel, which serves as self-template during the stepwise calcination process. The 1D NCM presents specific capacities of 153, 140, 130 and 118mAh·g-1 at current densities of 0.1C, 0.5C, 1C and 2C, respectively. Subsequently, a novel stepwise crystallization process consisting of a higher crystallization temperature and longer period for grain growth is employed to prepare single crystal NCM nanoparticles. The modified sol-gel process followed by optimized crystallization process results in significant improvements in chemical and physical characteristics of the NCM particles. They include a fully-developed single crystal NCM with uniform composition and a porous NCM architecture with a reduced degree of fusion and a large specific surface area. The NCM cathode material with these structural modifications in turn presents significantly enhanced specific capacities of 173.9, 166.9, 158.3 and 142.3mAh·g -1 at 0.1C, 0.5C, 1C and 2C, respectively. Carbon nanotube (CNT) is used to improve the relative low power capability and poor cyclic stability of NCM caused by its poor electrical conductivity. The NCM/CNT nanocomposites cathodes are prepared through simply mixing of the two component materials followed by a thermal treatment. The CNTs were functionalized to obtain uniformly-dispersed MWCNTs in the NCM matrix. The electrochemical

  11. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
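
    Spyglass's key idea, per the abstract, is exploiting storage-system structure (namespace locality) rather than a general-purpose database. A minimal illustration of subtree-partitioned metadata indexing is sketched below; the partition granularity, the attributes indexed, and the query form are all simplifications for illustration, not Spyglass internals.

        import os
        from collections import defaultdict

        def build_index(root: str, depth: int = 1):
            """Partition file metadata by subtree (first `depth` path components)."""
            index = defaultdict(list)
            for dirpath, _dirs, files in os.walk(root):
                rel = os.path.relpath(dirpath, root)
                part = os.sep.join(rel.split(os.sep)[:depth])
                for name in files:
                    full = os.path.join(dirpath, name)
                    try:
                        st = os.stat(full)
                    except OSError:
                        continue
                    index[part].append((full, st.st_size, st.st_mtime))
            return index

        def search(index, subtree: str, min_size: int):
            """Namespace-scoped query: only the matching partition is scanned."""
            return [p for p, size, _ in index.get(subtree, []) if size >= min_size]

        idx = build_index(".", depth=1)
        print(search(idx, ".", min_size=10_000))   # files >= 10 kB at the top level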

  12. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    SciTech Connect

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  13. Teacher and School Leader Effectiveness: Lessons Learned from High-Performing Systems. Issue Brief

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2011

    2011-01-01

    In an effort to find best practices in enhancing teacher effectiveness, the Alliance for Excellent Education (Alliance) and the Stanford Center for Opportunity Policy in Education (SCOPE) looked abroad at education systems that appear to have well-developed and effective systems for recruiting, preparing, developing, and retaining teachers and…

  14. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  15. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which provides the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High-speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It remains an open question whether, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.
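
    The caveat above, that a machine's power goes unused when the application is not highly parallel, is Amdahl's law. The paper's own analysis scheme is not reproduced in the abstract, but the constraint it works around can be stated in a few lines; the workload fractions below are hypothetical.

        def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
            """Upper bound on speedup when only part of the work parallelizes."""
            serial = 1.0 - parallel_fraction
            return 1.0 / (serial + parallel_fraction / n_processors)

        # A KBS mixing symbolic (poorly parallel) and vector (highly parallel) work.
        for p in (0.50, 0.90, 0.99):
            print(f"parallel fraction {p:.2f}: "
                  f"speedup on 64 CPUs = {amdahl_speedup(p, 64):.1f}x")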

  16. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  17. High performance CCD camera system for digitalisation of 2D DIGE gels.

    PubMed

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light-emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from the linear range and limit of detection.

  18. High-performance sub-terahertz transmission imaging system for food inspection

    PubMed Central

    Ok, Gyeongsik; Park, Kisang; Chun, Hyang Sook; Chang, Hyun-Joo; Lee, Nari; Choi, Sung-Wook

    2015-01-01

    Unlike X-ray systems, a terahertz imaging system can distinguish low-density materials in a food matrix. For this technique to be applied to food inspection, imaging resolution and acquisition speed must be enhanced simultaneously. We have therefore developed the first continuous-wave sub-terahertz transmission imaging system with a polygonal mirror. Using an f-theta lens and a polygonal mirror, beam scanning is performed over a range of 150 mm. For obtaining transmission images, the line-beam scan is combined with sample translation. The imaging system demonstrates that a pattern with 2.83 mm line width at 210 GHz can be identified at a scanning speed of 80 mm/s. PMID:26137392

  19. An ultralightweight, evacuated, load-bearing, high-performance insulation system. [for cryogenic propellant tanks

    NASA Technical Reports Server (NTRS)

    Parmley, R. T.; Cunnington, G. R., Jr.

    1978-01-01

    A new hollow-glass microsphere insulation and a flexible stainless-steel vacuum jacket were demonstrated on a flight-weight cryogenic test tank, 1.17 m in diameter. The system weighs one-third as much as the most advanced vacuum-jacketed design demonstrated to date, a free-standing honeycomb hard shell with a multilayer insulation system (for a Space Tug application). Design characteristics of the flexible vacuum jacket are presented along with a model describing the insulation thermal performance as a function of boundary temperatures and emittance, compressive load on the insulation, and insulation gas pressure. Test data are compared with model predictions and with prior flat-plate calorimeter test results. Potential applications for this insulation system or a derivative of it include the cryogenic Space Tug, the Single-Stage-to-Orbit Space Shuttle, LH2-fueled subsonic and hypersonic aircraft, and LNG applications.

  20. High performance CCD camera system for digitalisation of 2D DIGE gels.

    PubMed

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light-emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from the linear range and limit of detection. PMID:27252121

  1. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

    SciTech Connect

    Sickinger, D.; Van Geet, O.; Ravenscroft, C.

    2014-11-01

    In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and the potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. This system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling were included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control were included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no maintenance required and no leaks.
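
    Heat-capture figures of this kind come down to a coolant-side energy balance, q = m_dot * c_p * dT, divided by the measured server power. The numbers in the sketch below are round placeholders for illustration, not NREL's measurements.

        CP_WATER = 4186.0          # J/(kg*K)

        def captured_fraction(flow_kg_s, t_in_c, t_out_c, server_power_w):
            """Fraction of server heat carried away by the liquid loop."""
            q = flow_kg_s * CP_WATER * (t_out_c - t_in_c)
            return q / server_power_w

        # Hypothetical rack: 0.20 kg/s loop flow, 27 C in, 32 C out, 7 kW IT load.
        frac = captured_fraction(0.20, 27.0, 32.0, 7000.0)
        print(f"liquid loop captures ~{frac:.0%} of server heat")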

  2. High performance in low-flow solar domestic hot water systems

    SciTech Connect

    Dayan, M.

    1997-12-31

    Low-flow solar hot water heating systems employ flow rates on the order of 1/5 to 1/10 of the conventional flow. Low-flow systems are of interest because the reduced flow rate allows smaller-diameter tubing, which is less costly to install. Further, low-flow systems result in increased tank stratification. Lower collector inlet temperatures are achieved through stratification, and the useful energy produced by the collector is increased. The disadvantage of low-flow systems is that the collector heat removal factor decreases with decreasing flow rate. Many solar domestic hot water systems require an auxiliary electric source to operate a pump in order to circulate fluid through the solar collector. A photovoltaic-driven pump can be used to replace the standard electrical pump; PV-driven pumps provide an ideal means of controlling the flow rate, as the pumps will only circulate fluid when there is sufficient radiation. Peak performance was always found to occur when the heat exchanger tank-side flow rate was approximately equal to the average load flow rate. For low collector-side flow rates, a small deviation from the optimum flow rate will dramatically affect system performance.
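
    The flow-rate trade-off described above is captured by the standard collector heat removal factor (in the Duffie and Beckman form): F_R = (m_dot c_p / A_c U_L) [1 - exp(-A_c U_L F' / (m_dot c_p))], which falls as the flow drops. The collector parameters in the sketch below are typical textbook values, not the author's.

        import math

        def heat_removal_factor(m_dot, cp=4186.0, area=4.0, u_loss=5.0, f_prime=0.9):
            """F_R for a flat-plate collector (Duffie & Beckman form).

            m_dot: collector flow, kg/s; area: m^2; u_loss: W/(m^2*K)."""
            a = m_dot * cp / (area * u_loss)
            return a * (1.0 - math.exp(-f_prime / a))

        # Conventional flow vs. low flow (~1/8th): F_R drops, but tank
        # stratification lowers the inlet temperature, which can win overall.
        for m in (0.06, 0.0075):
            print(f"flow {m*1000:5.1f} g/s: F_R = {heat_removal_factor(m):.3f}")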

  3. A high-performance miniaturized time division multiplexed sensor system for remote structural health monitoring

    NASA Astrophysics Data System (ADS)

    Lloyd, Glynn D.; Everall, Lorna A.; Sugden, Kate; Bennion, Ian

    2004-09-01

    We report for the first time the design, implementation, and commercial application of a hand-held optical time division multiplexed, distributed fibre Bragg grating sensor system. A unique combination of state-of-the-art electronic and optical components enables system miniaturization whilst maintaining exceptional performance. Supporting more than 100 low-cost sensors per channel, the battery-powered system operates remotely via a wireless GSM link, making it ideal for real-time structural health monitoring in harsh environments. Driven by highly configurable timing electronics, an off-the-shelf telecommunications semiconductor optical amplifier performs combined amplification and gating. This novel optical configuration boasts a spatial resolution of less than 20 cm and an optical signal-to-noise ratio of better than 30 dB, yet utilizes sensors with reflectivities of only a few percent and does not require RF-speed signal processing devices. This paper highlights the performance and cost advantages of a system that utilizes TDM-style, mass-manufactured commodity FBGs. Created in continual lengths, these sensors reduce stock inventory, eliminate application-specific array design, and simplify system installation and expansion. System analysis from commercial installations in oil exploration, wind energy, and vibration measurement is presented, with results showing kilohertz interrogation speed and microstrain resolution.

  4. Structural integrity and damage assessment of high performance arresting cable systems using an embedded distributed fiber optic sensor (EDIFOS) system

    NASA Astrophysics Data System (ADS)

    Mendoza, Edgar A.; Kempen, Cornelia; Sun, Sunjian; Esterkin, Yan; Prohaska, John; Bentley, Doug; Glasgow, Andy; Campbell, Richard

    2010-04-01

    Redondo Optics, in collaboration with the Cortland Cable Company, TMT Laboratories, and Applied Fiber under a US Navy SBIR project, is developing an embedded distributed fiber optic sensor (EDIFOSTM) system for the real-time structural health monitoring, damage assessment, and lifetime prediction of next-generation synthetic material arresting gear cables. The EDIFOSTM system represents a new, highly robust and reliable technology that can be used for the structural damage assessment of critical cable infrastructures. The Navy is currently investigating the use of new, all-synthetic-material arresting cables. The arresting cable is one of the most stressed components in the entire arresting gear landing system. Synthetic rope materials offer higher performance in terms of strength-to-weight characteristics, which improves the arresting gear engine's performance, resulting in reduced wind-over-deck requirements, higher aircraft bring-back-weight capability, simplified operation, maintenance, and supportability, and reduced life cycle costs. While employing synthetic cables offers many advantages for the Navy's future needs, the unknown failure modes of these cables remain a high technical risk. For these reasons, Redondo Optics is investigating the use of embedded fiber optic sensors within the synthetic arresting cables to provide real-time structural assessment of the cable state and to inform the operator when a particular cable has suffered impact damage, is near failure, or is approaching the limit of its service lifetime. To date, ROI and its collaborators have developed a technique for embedding multiple sensor fibers within the strands of high-performance synthetic material cables and have used the embedded fiber sensors to monitor the structural integrity of the cable structures during tensile and compressive loads exceeding 175,000 lbf, without any damage to the cable structure or the embedded fiber sensors.

  5. Ultra-high performance, solid-state, autoradiographic image digitization and analysis system.

    PubMed

    Lear, J L; Pratt, J P; Ackermann, R F; Plotnick, J; Rumley, S

    1990-06-01

    We developed a Macintosh II-based, charge-coupled device (CCD) image digitization and analysis system for high-speed, high-resolution quantification of autoradiographic image data. A linear CCD array with 3,500 elements was attached to a precision drive assembly and mounted behind a high-uniformity lens. The drive assembly was used to sweep the array perpendicularly to its axis so that an entire 20 x 25-cm autoradiographic image-containing film could be digitized into 256 gray levels at 50-micron resolution in less than 30 sec. The scanner was interfaced to a Macintosh II computer through a specially constructed NuBus circuit board, and software was developed for autoradiographic data analysis. The system was evaluated by scanning individual films multiple times, then measuring the variability of the digital data between the different scans. Image data were found to be virtually noise free. The coefficient of variation averaged less than 1%, an accuracy significantly better than that of both high-speed, low-resolution video camera (VC) systems and low-speed, high-resolution rotating drum densitometers (RDD). Thus, the CCD scanner-Macintosh computer analysis system offers the advantage over VC systems of the ability to digitize entire films containing many autoradiograms, but with much greater speed and accuracy than achievable with RDD scanners. PMID:2385214

  6. The role of chromium and of molybdenum in high performance alloys for the design of heat recovery systems

    SciTech Connect

    Kirchheiner, R.; Stenner, F.; Schambach, L.

    1997-08-01

    In Europe, a growing demand for heat recovery systems comes from the power plant operators. Heat recovery systems are designed to be operated under acid dewpoint conditions. Only high performance alloys can withstand the corrosive load generated by the precipitation of hot, concentrated and contaminated mineral acids. In a nickel matrix, the right balance of the elements chromium and molybdenum is decisive for the corrosion resistance of such metallic materials. On the basis of laboratory and field investigations, the corrosion behavior of new and established alloys is described in this paper.

  7. Building America Best Practices Series, Volume 6: High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Ruiz, Kathleen A.; Steward, Heidi E.; Love, Pat M.

    2007-06-04

    This guide was written by PNNL for the US Department of Energy's Building America program to provide information for residential production builders interested in building near-zero-energy homes. It provides in-depth descriptions of various rooftop photovoltaic power generating systems for homes, as well as extensive information on various designs of solar thermal water heating systems. The guide gives construction company owners and managers an understanding of how solar technologies can be added to their homes in a way that is cost effective, practical, and marketable. Twelve case studies provide examples of production builders across the United States who are building energy-efficient homes with photovoltaic or solar water heating systems.

  8. A High Performance Sample Delivery System for Closed-Path Eddy Covariance Measurements

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Leggett, Graham; Alstad, Karrin; Wahl, Edward

    2016-04-01

    The Picarro G2311-f Cavity Ring-Down Spectrometer (CRDS) measures CO2, CH4, and water vapor at high frequency with parts-per-billion (ppb) sensitivity for eddy covariance, gradient, and eddy accumulation measurements. In flux mode, the analyzer measures the concentration of all three species at 10 Hz with a cavity gas-exchange rate of 5 Hz. We developed an enhanced pneumatic sample delivery system for drawing air from the atmosphere into the cavity. The new sample delivery system maintains the 5 Hz gas-exchange rate and allows longer sample intake lines to be configured in tall-tower applications (> 250 ft line at sea level). We quantified the system performance in terms of vacuum pump headroom and 10-90% concentration step response for several intake line lengths at various elevations. Sample eddy covariance data are shown from an alfalfa field in Northern California, USA.
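
    At the analysis end, the eddy covariance flux is simply the covariance of vertical wind and scalar concentration fluctuations, F = mean(w'c'). A minimal sketch over one averaging block follows, with synthetic 10 Hz series standing in for the analyzer and sonic anemometer streams.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10 * 60 * 30                    # 30 min of 10 Hz samples

        # Synthetic correlated series standing in for w (m/s) and CO2 (umol/mol).
        w = rng.normal(0.0, 0.3, n)
        c = 400.0 + 0.5 * w + rng.normal(0.0, 0.2, n)

        # Reynolds decomposition: flux = mean of the product of fluctuations.
        flux = np.mean((w - w.mean()) * (c - c.mean()))
        print(f"kinematic CO2 flux ~ {flux:.4f} (m/s)*(umol/mol)")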

  9. Damage-mitigating control of space propulsion systems for high performance and extended life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang; Dai, Xiaowen; Carpino, Marc; Lorenzo, Carl F.

    1993-01-01

    Calculations are presented showing that a substantial improvement in service life of a reusable rocket engine can be achieved by an insignificant reduction in the system dynamic performance. The paper introduces the concept of damage mitigation and formulates a continuous-time model of fatigue damage dynamics. For control of complex mechanical systems, damage prediction and damage mitigation are carried out based on the available sensory and operational information such that the plant can be inexpensively maintained and safely and efficiently steered under diverse operating conditions. The results of simulation experiments are presented for transient operations of a reusable rocket engine.

  10. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadline, and is called power and deadline-aware multicore scheduling (PDAMS). Experiment results show that the proposed algorithm can greatly reduce energy consumption by up to 54.2% and the deadline times missed, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
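
    PDAMS itself is not specified in the abstract; the sketch below shows only the generic skeleton such a scheduler builds on, namely earliest-deadline ordering plus least-loaded core assignment, with the power-aware weighting left out. The task parameters are invented.

        # Generic deadline-aware load balancing: sort by deadline (EDF order),
        # then place each task on the currently least-loaded core.
        tasks = [("t1", 4, 20), ("t2", 2, 8), ("t3", 6, 30), ("t4", 3, 9),
                 ("t5", 5, 25)]                       # (name, runtime, deadline)

        n_cores = 2
        load = [0] * n_cores
        plan = {c: [] for c in range(n_cores)}

        for name, runtime, deadline in sorted(tasks, key=lambda t: t[2]):
            core = min(range(n_cores), key=load.__getitem__)
            start = load[core]
            load[core] += runtime
            if load[core] > deadline:
                print(f"{name}: deadline {deadline} missed (finish {load[core]})")
            plan[core].append((name, start, load[core]))

        for core, seq in plan.items():
            print(f"core {core}: {seq}  (load {load[core]})")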

  11. Building High-Performing and Improving Education Systems: Quality Assurance and Accountability. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    Monitoring, evaluation, and quality assurance in their various forms are seen as being one of the foundation stones of high-quality education systems. De Grauwe, writing about "school supervision" in four African countries in 2001, linked the decline in the quality of basic education to the cut in resources for supervision and support.…

  12. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  13. A bioinspired, reusable, paper-based system for high-performance large-scale evaporation.

    PubMed

    Liu, Yanming; Yu, Shengtao; Feng, Rui; Bernard, Antoine; Liu, Yang; Zhang, Yao; Duan, Haoze; Shang, Wen; Tao, Peng; Song, Chengyi; Deng, Tao

    2015-05-01

    A bioinspired, reusable, paper-based gold-nanoparticle film is fabricated by depositing an as-prepared gold-nanoparticle thin film on airlaid paper. This paper-based system with enhanced surface roughness and low thermal conductivity exhibits increased efficiency of evaporation, scale-up potential, and proven reusability. It is also demonstrated to be potentially useful in seawater desalination.

  14. Aim Higher: Lofty Goals and an Aligned System Keep a High Performer on Top

    ERIC Educational Resources Information Center

    McCommons, David P.

    2014-01-01

    Every school district is feeling the pressure to ensure higher academic achievement for all students. A focus on professional learning for an administrative team not only improves student learning and achievement, but also assists in developing a systemic approach for continued success. This is how the Fox Chapel Area School District in…

  15. Knowledge Work Supervision: Transforming School Systems into High Performing Learning Organizations.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    1997-01-01

    This article describes a new supervision model conceived to help a school system redesign its anatomy (structures), physiology (flow of information and webs of relationships), and psychology (beliefs and values). The new paradigm (Knowledge Work Supervision) was constructed by reviewing the practices of several interrelated areas: sociotechnical…

  16. A high performance low cost flow-through solar water pasteurization system

    SciTech Connect

    Duff, W.S.; Hodgson, D.

    1999-07-01

    In the rural areas of developing countries, boiling is the means most often used to purify water for food preparation and drinking. However, boiling is relatively expensive, consumes substantial amounts of fossil energy, and the associated wood gathering contributes to the depletion of forests. Solar water pasteurization is one of the most promising approaches for a cost-effective, robust, and reliable solution to these problems. The authors are developing a solar water pasteurization system based on an evacuated solar collector, an appropriately matched heat exchanger, and a system for regulating the pasteurization temperature and holding time. The unit is completely passive, requiring no power of any sort. As part of the design requirements, the authors have imposed low fabrication and installation cost goals; experimental versions have been fabricated for a materials cost of under $150 US. The authors have designed, built, and experimentally evaluated several designs. The most recent testing was performed on a system using water density as the basis for regulating the pasteurization temperature and holding time. They have tested and are currently refining a new design based on an innovative regulation system, which results in a system that is more compact and robust than the water-density approach. Once testing is completed, they have arranged to place two units at a school in Uganda, where the units will be exposed to the actual conditions of use in developing countries. They will report the details of current and previous designs, provide experimental results and, in the presentation in April, relate initial experiences with the units in Uganda.

  17. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.
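
    To make the reported metrics concrete, the sketch below computes the expansion factor and a simple fair-share priority using common textbook definitions; the job numbers are invented, and the paper's exact Service Ratio formulation is not reproduced here.

    ```python
    # Common scheduler metrics (definitions assumed, not taken from the paper).
    def expansion_factor(wait_s, run_s):
        """Expansion factor: (wait + run) / run; 1.0 means no queueing delay."""
        return (wait_s + run_s) / run_s

    def fair_share_priority(allocated_share, used_fraction):
        """Under-served groups (usage below allocation) get priority > 1,
        over-served groups get priority < 1."""
        return allocated_share / max(used_fraction, 1e-9)

    print(expansion_factor(wait_s=3600, run_s=1800))                      # 3.0
    print(fair_share_priority(allocated_share=0.25, used_fraction=0.10))  # 2.5
    ```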

  18. Whisker: a client-server high-performance multimedia research control system.

    PubMed

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It provides a way for client programs to access this hardware, communicating with them via a simple text-based network protocol running over the standard Internet protocol suite. Clients implementing behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. This system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described. PMID:21139173
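
    As a rough sketch of the client side of such a line-oriented TCP control protocol, the snippet below sends newline-terminated commands and reads single-line replies. The port number and command strings are placeholders, not the documented Whisker command set.

    ```python
    # Minimal client for a hypothetical line-oriented control server.
    import socket

    HOST, PORT = "localhost", 9999   # assumed server address

    def send_command(sock, command):
        """Send one newline-terminated text command and read one reply line."""
        sock.sendall((command + "\n").encode("ascii"))
        reply = b""
        while not reply.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break                # server closed the connection
            reply += chunk
        return reply.decode("ascii").strip()

    with socket.create_connection((HOST, PORT)) as sock:
        print(send_command(sock, "Version"))            # hypothetical command
        print(send_command(sock, "LineSetState 3 on"))  # hypothetical command
    ```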

  19. Isothermal Adsorption Measurement for the Development of High Performance Solid Sorption Cooling System

    NASA Astrophysics Data System (ADS)

    Saha, Bidyut Baran; Koyama, Shigeru; Alam, K. C. Amanul; Hamamoto, Yoshinori; Akisawa, Atsushi; Kashiwagi, Takao; Ng, Kim Choon; Chua, Hui Tong

    Interest in low-grade-heat-powered solid sorption systems using natural refrigerants has increased; however, a drawback of these adsorption systems is their poor performance. The objective of this paper is to improve the performance of thermally powered adsorption cooling systems by selecting new adsorbent-refrigerant pairs. The adsorption capacity of an adsorbent-refrigerant pair depends on the thermophysical properties of the adsorbent (pore size, pore volume, and pore diameter) and the isothermal characteristics of the pair. In this paper, the thermophysical properties of three types of silica gel and three types of pitch-based activated carbon fiber are determined from nitrogen adsorption isotherms. Standard nitrogen adsorption/desorption measurements on the various adsorbents were performed at the liquid nitrogen temperature of 77.4 K. The surface area of each adsorbent was determined from the Brunauer-Emmett-Teller (BET) plot of the nitrogen adsorption data, and the pore size distribution was measured by the Horvath-Kawazoe (HK) method. The adsorption/desorption isotherms showed that all three carbon fibers exhibit no hysteresis and have better adsorption capacity than the silica gels.
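
    As a worked illustration of the BET analysis step, the sketch below fits the linearized BET equation to a few synthetic nitrogen isotherm points and converts the monolayer capacity into a specific surface area; the data are invented, not taken from the paper.

    ```python
    # BET surface area from a (synthetic) 77.4 K nitrogen isotherm.
    import numpy as np

    x = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])       # p/p0 (assumed)
    v = np.array([88.5, 101.9, 111.3, 120.2, 129.4, 139.6])  # cm^3 STP/g (assumed)

    # Linearized BET: x / (v(1-x)) = 1/(vm*c) + ((c-1)/(vm*c)) * x
    y = x / (v * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)

    vm = 1.0 / (slope + intercept)   # monolayer capacity (cm^3 STP/g)
    c = slope / intercept + 1.0      # BET constant

    N_A = 6.022e23                   # molecules/mol
    sigma = 0.162e-18                # m^2 covered per N2 molecule
    V_MOLAR = 22414.0                # cm^3 STP/mol
    area = vm / V_MOLAR * N_A * sigma
    print(f"vm = {vm:.0f} cm^3/g, c = {c:.0f}, BET area = {area:.0f} m^2/g")
    ```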

  20. High Performance Operation Control for Heat Driven Heat Pump System using Metal Hydride

    NASA Astrophysics Data System (ADS)

    Okamoto, Hideyuki; Masuda, Masao; Kozawa, Yoshiyuki

    It is recognized that the COP of heat-driven heat pump systems using metal hydride is generally 0.3-0.4. In order to raise the COP, we have proposed two kinds of specific operation control: control of the cycle change time according to the cold heat load, and control of the cooling water temperature according to the outside-air wet-bulb temperature. The characteristics of the metal hydride heat pump system were established through various experiments and simulations, and the validity of the simulation model was confirmed by comparison with experimental results. In simulations of the actual operation control applied month by month, the yearly COP rose to 0.5-0.6 for a practical-scale air-conditioning system, regardless of building use. With hour-by-hour operation control, the yearly COP rose to 0.6-0.65. Moreover, in the case of an office building with 40% sensible heat recovery added, the yearly COP rose to more than 0.8.

  1. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  2. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    PubMed

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center. PMID:7703940

  4. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    PubMed

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-01

    The microbial fuel cell (MFC), which can generate electricity directly from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow most electrical applications to be operated directly, whether supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the power extracted from MFCs, regardless of their power and voltage fluctuations over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by drawing power directly from the MFC itself, without any external power. The overall system efficiency, defined as the ratio between the input energy from the MFC and the output energy stored in the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits sensor data, successfully transmitting a reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels. PMID:25365216
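
    Dynamic MPPT is commonly realized as a perturb-and-observe loop; the sketch below applies one to a toy power curve peaking near the 0.4 V / 512 μW operating point quoted above. The polarization model and step sizes are invented, and the paper's chip implements MPPT in mixed-signal hardware rather than software.

    ```python
    # Perturb-and-observe MPPT against a made-up MFC power curve.
    def mfc_power(duty):
        """Toy model: harvested power vs. converter duty cycle, peaking at 0.5."""
        return max(0.0, 512e-6 * (1.0 - (duty - 0.5) ** 2 / 0.25))

    def track_mpp(steps=50, duty=0.2, delta=0.02):
        """Perturb the duty cycle; keep moving in the direction that raises power."""
        p_prev = mfc_power(duty)
        direction = 1.0
        for _ in range(steps):
            duty += direction * delta
            p = mfc_power(duty)
            if p < p_prev:           # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return duty, p_prev

    duty, power = track_mpp()
    print(f"settled near duty={duty:.2f}, power={power * 1e6:.0f} uW")
    ```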

  6. Commodity CPU-GPU System for Low-Cost , High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under $2,500 from commodity components consisting of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4GHz) and two high-end GPUs (nVidia's GeForce GTX 295 and Tesla C1060); a 1200 watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment in which the Message Passing Interface (MPI) is used for managing the working loads, for transferring data among different GPU devices, and for minimizing the need for CPU memory. Test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that the speedups for double-precision calculations can be greater than 10x (GPU vs. CPU) and are larger still (>20x) for single-precision calculations. In addition, we have enabled the combination of Matlab with CUDA for interactive visualization through MPI: two GPU devices are used for simulation and one GPU device is used for visualizing the results as the simulation runs. Our experience with this commodity system has shown that running multiple applications on one GPU device, or running one application across multiple GPU devices, can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x compared to 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to get ourselves ready to run large-scale simulations on the next generation of computing environments, which we believe will hybridize CPU and GPU architectures.
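
    A minimal sketch of the MPI workload-management pattern described above, using mpi4py: each rank claims a GPU device, two ranks simulate while a third visualizes. The device count, the rank roles, and the placeholder payloads are assumptions; the actual CUDA kernels are omitted. Run with, e.g., `mpirun -n 3 python script.py`.

    ```python
    # Rank-to-GPU mapping with simulation ranks feeding a visualization rank.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    NUM_GPUS = 3                 # e.g., GTX 295 (2 devices) + Tesla C1060

    device = rank % NUM_GPUS     # simple rank-to-device mapping
    if rank < 2:
        # ...run a simulation kernel on `device`, then ship results onward
        frame = f"frame from rank {rank} on GPU {device}"
        comm.send(frame, dest=2, tag=0)
    elif rank == 2:
        # visualization device: gather frames as the simulation runs
        for src in (0, 1):
            print(comm.recv(source=src, tag=0))
    ```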

  7. High-performance digital triggering system for phase-controlled rectifiers

    SciTech Connect

    Olsen, R.E.

    1983-01-01

    The larger power supplies used to power accelerator magnets are most commonly polyphase rectifiers using phase control. While this method is capable of handling impressive amounts of power, it suffers from one serious disadvantage, namely that of subharmonic ripple. Since the stability of the stored beam depends to a considerable extent on the regulation of the current in the bending magnets, subharmonic ripple, especially that of low frequency, can have a detrimental effect. At the NSLS, we have constructed a 12-pulse, phase control system using digital signal processing techniques that essentially eliminates subharmonic ripple.

  8. High Performance Fuel Cell and Electrolyzer Membrane Electrode Assemblies (MEAs) for Space Energy Storage Systems

    NASA Technical Reports Server (NTRS)

    Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.

    2012-01-01

    Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements of 0.92 V and 1.44 V at 200 mA/cm2 can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks, and electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection, and membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst driven; catalyst selection will likewise be driven by durability requirements. The round-trip efficiency, based on cell performance, is approximately 65%.
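
    The quoted round-trip efficiency can be sanity-checked from the cell voltages alone: at equal current density and charge throughput, it is roughly the discharge (fuel cell) voltage divided by the charge (electrolyzer) voltage, ignoring parasitic losses.

    ```python
    # Back-of-envelope round-trip efficiency from the stated MEA voltages.
    v_fuel_cell = 0.92     # V at 200 mA/cm^2 (discharge)
    v_electrolyzer = 1.44  # V at 200 mA/cm^2 (charge)
    print(f"round-trip efficiency ~ {v_fuel_cell / v_electrolyzer:.1%}")  # ~63.9%
    ```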

  9. Metal-based anode for high performance bioelectrochemical systems through photo-electrochemical interaction

    NASA Astrophysics Data System (ADS)

    Liang, Yuxiang; Feng, Huajun; Shen, Dongsheng; Long, Yuyang; Li, Na; Zhou, Yuyang; Ying, Xianbin; Gu, Yuan; Wang, Yanfeng

    2016-08-01

    This paper introduces a novel composite anode that uses light to enhance current generation and accelerate biofilm formation in bioelectrochemical systems. The composite anode is composed of a 316L stainless steel substrate and a nanostructured α-Fe2O3 photocatalyst (PSS). The electrode properties, current generation, and biofilm properties of the anode are investigated. In terms of photocurrent, the optimal deposition and heat-treatment times are found to be 30 min and 2 min, respectively, resulting in a maximum photocurrent of 0.6 A m-2. The start-up time of the PSS is 1.2 days and its maximum current density is 2.8 A m-2, respectively twice as fast as and 25 times that of the unmodified anode. The current density of the PSS remains stable during 20 days of illumination. Confocal laser scanning microscope images show that the PSS benefits biofilm formation, while electrochemical impedance spectroscopy indicates that the PSS reduces the charge-transfer resistance of the anode. Our findings show that photo-electrochemical interaction is a promising way to enhance the biocompatibility of metal anodes for bioelectrochemical systems.

  10. Design of high performance multivariable control systems for supermaneuverable aircraft at high angle of attack

    NASA Technical Reports Server (NTRS)

    Valavani, Lena

    1995-01-01

    The main motivation for the work under the present grant was to use nonlinear feedback linearization methods to further enhance performance capabilities of the aircraft, and robustify its response throughout its operating envelope. The idea was to use these methods in lieu of standard Taylor series linearization, in order to obtain a well behaved linearized plant, in its entire operational regime. Thus, feedback linearization was going to constitute an 'inner loop', which would then define a 'design plant model' to be compensated for robustness and guaranteed performance in an 'outer loop' application of modern linear control methods. The motivation for this was twofold; first, earlier work had shown that by appropriately conditioning the plant through conventional, simple feedback in an 'inner loop', the resulting overall compensated plant design enjoyed considerable enhancement of performance robustness in the presence of parametric uncertainty. Second, the nonlinear techniques did not have any proven robustness properties in the presence of unstructured uncertainty; a definition of robustness (and performance) is very difficult to achieve outside the frequency domain; to date, none is available for the purposes of control system design. Thus, by proper design of the outer loop, such properties could still be 'injected' in the overall system.
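
    As a generic illustration of the inner-loop/outer-loop structure described above (on a pendulum, not the report's aircraft model), the sketch below cancels the plant nonlinearity with an inner feedback-linearizing law and imposes linear error dynamics with an outer loop; all gains and parameters are invented.

    ```python
    # Feedback linearization of a pendulum: theta'' = -(G/L)*sin(theta) + u.
    import math

    G, L_PEND, DT = 9.81, 1.0, 0.001   # gravity, length, time step (assumed)

    def simulate(theta=2.0, omega=0.0, theta_ref=0.0, steps=5000):
        for _ in range(steps):
            # outer loop: linear design, poles at s = -2 (critically damped)
            v = -4.0 * (theta - theta_ref) - 4.0 * omega
            # inner loop: cancel the sine nonlinearity, leaving theta'' = v
            u = (G / L_PEND) * math.sin(theta) + v
            alpha = -(G / L_PEND) * math.sin(theta) + u   # equals v exactly
            omega += alpha * DT
            theta += omega * DT
        return theta, omega

    print(simulate())   # converges to (~0, ~0) under the linear error dynamics
    ```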

  11. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    SciTech Connect

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used, state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  12. A high-performance multilane microdevice system designed for the DNA forensics laboratory.

    PubMed

    Goedecke, Nils; McKenna, Brian; El-Difrawy, Sameh; Carey, Loucinda; Matsudaira, Paul; Ehrlich, Daniel

    2004-06-01

    We report preliminary testing of "GeneTrack", an instrument designed for the specific application of multiplexed short tandem repeat (STR) DNA analysis. The system supports a glass microdevice with 16 lanes of 20 cm effective length and double-T cross injectors. A high-speed galvanometer-scanned four-color detector was specially designed to accommodate the high elution rates on the microdevice. All aspects of the system were carefully matched to practical crime lab requirements for rapid reproducible analysis of crime-scene DNA evidence in conjunction with the United States DNA database (CODIS). Statistically significant studies demonstrate that an absolute, three-sigma, peak accuracy of 0.4-0.9 base pair (bp) can be achieved for the CODIS 13-locus multiplex, utilizing a single channel per sample. Only 0.5 microL of PCR product is needed per lane, a significant reduction in the consumption of costly chemicals in comparison to commercial capillary machines. The instrument is also designed to address problems in temperature-dependent decalibration and environmental sensitivity, which are weaknesses of the commercial capillary machines for the forensics application. PMID:15188257

  13. High performance electrophoresis system for site-specific entrapment of nanoparticles in a nanoarray

    NASA Astrophysics Data System (ADS)

    Han, Jin-Hee; Lakshmana, Sudheendra; Kim, Hee-Joo; Hass, Elizabeth A.; Gee, Shirley; Hammock, Bruce D.; Kennedy, Ian

    2010-02-01

    A nanoarray, integrated with an electrophoretic system, was developed to trap nanoparticles in their corresponding nanowells. This nanoarray overcomes a complication of conventional microarrays (loss of the function and activity of proteins binding to the surface) while using minimal amounts of sample. The nanoarray is also superior to other immunoassay-based biosensors in lowering the limit of detection to the femto- or atto-molar level. In addition, our electrophoretic particle entrapment system (EPES) can effectively trap nanoparticles using a low trapping force for a short duration, so good conditions for biological samples conjugated to particles can be maintained. The channels were patterned onto a bilayer consisting of a PMMA and LOL coating on a conductive indium tin oxide (ITO)-coated glass slide by e-beam lithography. Suspensions of 170 nm nanoparticles were then added to the chip, which was connected to a positive voltage; the droplet was covered with another ITO-coated glass slide connected to a ground terminal. Negatively charged fluorescent nanoparticles (blue emission) were selectively trapped onto the ITO surface at the bottom of the wells by following the electric field lines. Numerical modeling was performed using the commercially available software COMSOL Multiphysics to provide a better understanding of the phenomenon of electrophoresis in a nanoarray. The simulation results are also useful for optimally designing a nanoarray for practical applications.

  14. A high-performance network for a distributed-control system

    NASA Astrophysics Data System (ADS)

    Cuttone, G.; Aghion, F.; Giove, D.

    1989-04-01

    Local area networks play a central role in modern distributed-control systems for accelerators. For a superconducting cyclotron under construction at the University of Milan, an optical Ethernet network has been implemented for the interconnection of multicomputer-based stations. Controller boards with VLSI protocol chips have been used, and the higher levels of the ISO OSI model have been implemented to suit real-time control requirements. The experimental setup for measuring the data throughput between stations will be described. Memory-to-memory data transfer was studied as a function of packet size for packets ranging from 200 bytes to 10 Kbytes. Results, showing the data throughput to range from 0.2 to 1.1 Mbit/s, will be discussed.

  15. How to polarise all neutrons in one beam: a high performance polariser and neutron transport system

    NASA Astrophysics Data System (ADS)

    Rodriguez, D. Martin; Bentley, P. M.; Pappas, C.

    2016-09-01

    Polarised neutron beams are used in disciplines as diverse as magnetism, soft matter, and biology. However, most of these applications suffer from low flux, partly because the existing neutron polarising methods filter out one of the spin states, giving a transmission of at most 50%. With the purpose of using the neutrons that are usually discarded, we propose a system that splits them according to their polarisation, flips them to match the spin direction, and then focuses them at the sample. Monte Carlo (MC) simulations show that this is achievable over a wide wavelength range and with outstanding performance, at the price of a more divergent neutron beam at the sample position.

  16. Rotatable reagent cartridge for high-performance microvalve system on a centrifugal microfluidic device.

    PubMed

    Kawai, Takayuki; Naruishi, Nahoko; Nagai, Hidenori; Tanaka, Yoshihide; Hagihara, Yoshihisa; Yoshida, Yasukazu

    2013-07-16

    Recently, the microfluidic lab-on-a-CD (LabCD) has attracted the attention of researchers for its potential for pumpless, compact, chip-contained on-site bioassays. To control fluids in the LabCD, microvalves such as capillary, hydrophobic, siphon, and sacrificial valves have been employed. However, no such microvalve can regulate more than one channel. Thus, in a complicated bioassay with many sequential mixing, washing, and wasting steps, an intricate fluidic network with many microchannels, microvalves, and reservoirs is required, which increases assay costs in terms of both system development and chip preparation. To address this issue, we developed a rotatable reagent cartridge (RRC), a column-shaped tank with several rooms that store different reagents. By embedding the RRC in the LabCD and rotating it with a simple mechanical force, only the reagent in the room connected to the following channel is injected. By regulating the angle of the RRC relative to the LabCD, each reagent can be switched between retention and ejection. The developed RRC has no air vent hole; venting is achieved through the gas-permeable gap between the bottle and cap parts of the RRC. The RRC could inject 230 nL-10 μL of reagents with recoveries of more than 96%. Finally, an enzymatic assay of L-lactate was demonstrated in which the number of valves and reservoirs was minimized, significantly simplifying the fluidic system and increasing channel integratability. Quantitative analysis of 0-100 μM L-lactate was easily carried out with R² > 0.999, indicating the practical utility of the RRC for microfluidic bioanalysis. PMID:23802811

  17. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims to face most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support to define processing chains and workflows with tens to hundreds of data analytics operators for building real scientific use cases. With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases, and the Earth System Grid Federation (ESGF

  18. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    PubMed Central

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high-throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi-panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the data sets grow, until it fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large-scale experiments: highly compressed vector storage in a relational database, set-based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage cost, thus avoiding the problem of proliferating file copies. The named subsets are exported for downstream analysis; PLINK ped and map files are processed as inputs and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher-density panels replace previous lower-density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits per SNP, implying a capacity of 4 million SNPs per MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created, with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs, resulting in a
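
    The 2-bits-per-SNP figure follows from genotypes taking four states (two homozygotes, a heterozygote, and no-call), which packs 4 SNPs per byte and hence 4 million SNPs per MB. The sketch below shows one possible packing in NumPy; the code values are assumptions, not TheSNPpit's actual storage layout.

    ```python
    # Packing genotypes at 2 bits each (4 per byte).
    import numpy as np

    CODES = {"AA": 0, "AB": 1, "BB": 2, "NC": 3}   # assumed 2-bit genotype codes

    def pack(genotypes):
        """Pack a genotype list into a uint8 array, 4 genotypes per byte."""
        codes = np.array([CODES[g] for g in genotypes], dtype=np.uint8)
        padded = np.resize(codes, ((len(codes) + 3) // 4) * 4)
        padded[len(codes):] = 0                     # zero the padding slots
        quads = padded.reshape(-1, 4)
        return (quads[:, 0] | quads[:, 1] << 2 |
                quads[:, 2] << 4 | quads[:, 3] << 6).astype(np.uint8)

    def unpack(packed, n):
        """Recover the first n 2-bit codes from the packed bytes."""
        bits = packed[:, None] >> np.array([0, 2, 4, 6]) & 0b11
        return bits.reshape(-1)[:n]

    genotypes = ["AA", "AB", "BB", "NC", "AB"]
    packed = pack(genotypes)
    print(packed.nbytes, unpack(packed, len(genotypes)))   # 2 bytes for 5 SNPs
    ```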

  19. Coal-fired high performance power generating system. Quarterly progress report, October 1--December 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    Our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (FUTAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The cycle optimization effort under Task 2 outlines the evolution of our designs. The basic combined cycle approach now includes exhaust gas recirculation to quench the flue gas before it enters the convective air heater. By selecting the quench gas from a downstream location, it will be clean enough and cool enough (ca. 300°F) to be driven by a commercially available fan while still minimizing the volume of the convective air heater. Further modeling studies on the long axial flame, under Task 3, have demonstrated that this configuration is capable of providing the necessary energy flux to the radiant air panels. This flame, with its controlled mixing, constrains the combustion to take place in a fuel-rich environment, thus minimizing NOx production. Recent calculations indicate that the NOx produced is low enough that the SNCR section can further reduce it to within the DOE goal of 0.15 lbs/MBTU of fuel input. Also under Task 3, the air heater design optimization continued.

  20. Compressive sensing based Bayesian sparse channel estimation for OFDM communication systems: high performance and low complexity.

    PubMed

    Gui, Guan; Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) in data transmission. Broadband channels are often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence in the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without reporting the posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method that not only exploits the channel sparsity but also mitigates the unexpected channel uncertainty, without sacrificing any computational complexity. The proposed method can resolve ambiguity among multiple channel estimators caused by observation noise or correlation among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods.
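
    For reference, the conventional OMP-style SCE baseline mentioned above can be sketched in a few lines: greedily pick the training-matrix column most correlated with the residual, then re-fit the selected taps by least squares. The channel length, sparsity, and pilot matrix below are arbitrary; the paper's Bayesian estimator is not reproduced.

    ```python
    # Orthogonal matching pursuit for a synthetic sparse channel.
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, K = 64, 32, 4                            # taps, pilots, nonzero taps

    h = np.zeros(N)
    h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    A = rng.standard_normal((M, N)) / np.sqrt(M)   # training/measurement matrix
    y = A @ h + 0.01 * rng.standard_normal(M)      # noisy observation

    def omp(A, y, k):
        """Greedily select k columns, re-fitting by least squares each round."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        h_hat = np.zeros(A.shape[1])
        h_hat[support] = coef
        return h_hat

    h_hat = omp(A, y, K)
    print("relative error:", np.linalg.norm(h - h_hat) / np.linalg.norm(h))
    ```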

  1. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans Joachim; Duffy, Donald R.

    1989-01-01

    A summary is presented of the concentrator conceptual design work performed under a NASA-funded project. The design study centers around two basic efforts: conceptual design of a self-deploying, high-performance parabolic concentrator; and materials selection for a lightweight, shape-stable concentrator. The primary structural material selected for the concentrator is PEEK/carbon fiber composite. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes a circular shape with a void in the center. The deployable solar concentrator concept is applicable to a range of solar dynamic power systems of 25 kWe to more than 75 kWe.

  2. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    SciTech Connect

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

    This report documents our first-year efforts to address the use of many-core processors for high-performance cyber protection. As demands grow for higher-bandwidth (beyond 1 Gbit/s) network connections, the need for faster and more efficient cyber security solutions grows as well. Fortunately, in recent years the development of many-core network processors has seen increased interest. Prior working experience with many-core processors has led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high-performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space, and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.

  3. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  6. Relationships of cognitive and metacognitive learning strategies to mathematics achievement in four high-performing East Asian education systems.

    PubMed

    Areepattamannil, Shaljan; Caleon, Imelda S

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education systems, memorization strategies were negatively associated with mathematics achievement, whereas control strategies were positively associated with mathematics achievement. However, the association between elaboration strategies and mathematics achievement was mixed. In Shanghai-China and Korea, elaboration strategies were not associated with mathematics achievement. In Hong Kong-China and Singapore, on the other hand, elaboration strategies were negatively associated with mathematics achievement. Implications of these findings are briefly discussed.

  7. Development of a High-performance Optical System and Fluorescent Converters for High-resolution Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Sakai, T.; Yasuda, R.; Iikura, H.; Nojima, T.; Matsubayashi, M.

    Two novel devices for use in neutron imaging are introduced. The first is a high-performance optical lens for video camera systems. The lens system has a magnification of 1:1 and an F-number of 3, and its optical resolution is better than 5 μm. The second is a high-resolution fluorescent plate that converts neutrons into visible light. The fluorescent converter material consists of a mixture of 6LiF and ZnS(Ag) fine powder, and the thickness of the converter material is as little as 15 μm. The surface of the plate is coated with a 1 μm-thick gadolinium oxide layer, which is optically transparent and acts as an electron emitter for neutron detection. Our preliminary results show that the developed optical lens and fluorescent converter plates are very promising for high-resolution neutron imaging.

  8. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect

    Engelmann, Christian; Naughton, III, Thomas J

    2013-01-01

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.

  9. Small Delay and High Performance AD/DA Converters of Lease Circuit System for AM&FM Broadcast

    NASA Astrophysics Data System (ADS)

    Takato, Kenji; Suzuki, Dai; Ishii, Takashi; Kobayashi, Masato; Yamada, Hirokazu; Amano, Shigeru

    Many AM&FM broadcasting stations in Japan are connected by the NTT leased circuit system, for which a small-delay, high-performance AD/DA converter was developed. The system was designed to the ITU-T J.41 Recommendation (384 kbps); the transmission signal is 11-bit at 32 kHz, and the gain-frequency characteristic from 40 Hz to 15 kHz has to be quite flat. The ΔΣ AD/DA converter LSIs for audio applications on the market today achieve very high performance, but not enough for the leased circuit system. We found that it is not possible to meet the delay and gain-frequency requirements using a ΔΣ AD/DA converter LSI in normal operation alone, because the highest signal frequency of 15 kHz and the Nyquist frequency of 16 kHz are too close, producing aliasing around the Nyquist frequency. In this paper, we design an AD/DA architecture with small delay (1 msec) and a sharp cut-off LPF (100 dB attenuation at 16 kHz, and 1500 dB/oct from 15 kHz to 16 kHz) by operating the ΔΣ AD/DA converter LSIs at an oversampling rate such as 128 kHz and by adding a custom LPF designed as an Infinite Impulse Response (IIR) filter. The IIR filter is a 16th-order elliptic type consisting of eight biquad filters in series. We describe how the stability of the IIR filter was evaluated theoretically, by calculating the frequency response, pole-zero layout, and impulse response of each biquad filter, and experimentally, by adding an overflow detection circuit to each filter and applying overload input signals.
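
    A filter with the stated structure can be sketched with SciPy: a 16th-order elliptic low-pass designed directly as eight cascaded biquads (second-order sections) at the 128 kHz oversampling rate. The passband ripple value below is an assumption; the 100 dB stopband target follows the text.

    ```python
    # 16th-order elliptic LPF as eight biquads, plus a pole-radius stability check.
    from scipy import signal

    fs = 128_000     # oversampling rate (Hz)
    sos = signal.ellip(N=16, rp=0.1, rs=100, Wn=15_000,
                       btype="lowpass", output="sos", fs=fs)
    print(sos.shape)                             # (8, 6): eight second-order sections

    z, p, k = signal.sos2zpk(sos)
    print("stable:", bool((abs(p) < 1).all()))   # all poles inside the unit circle
    ```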

  10. Advanced real-time bus system for concurrent data paths used in high-performance image processing

    NASA Astrophysics Data System (ADS)

    Brodersen, Jorg; Palkovich, Roland; Landl, Dieter; Furtler, Johannes; Dulovits, Martin

    2004-05-01

    In this paper we present a new bus protocol satisfying extreme real-time demands. It has been applied to a high-performance quality inspection system that can involve up to eight sensors of various types; thanks to its modular configuration, this multi-sensor inspection system behaves externally as a single-sensor image processing system. In general, image processing systems comprise three basic functions: (i) image acquisition, (ii) image processing, and (iii) output of processed data. The data transfers for these three functions can be accomplished either by individual bus systems or by a single bus. With a single bus, the implementation complexity (protocol development, hardware requirements, and EMC considerations) is far smaller. An important goal of the new protocol design is to support extremely fast communication between individual processing modules. For example, input data (image acquisition) are transferred in real time to the individual processing modules while, concurrently, processed data are transferred to the output module. The key function of the protocol is therefore to realize concurrent data paths (data rates over 1.2 Gbit/s) using principles of pipeline architectures and time-division multiplexing, enabling concurrent data transfers over a single bus system. The function of the new bus protocol, including the hardware layout and an innovative bus arbiter, is described in detail.
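
    As a toy illustration of time-division multiplexing the three transfer types over one bus, the sketch below uses a fixed slot table that grants the bus to each data path in turn, so acquisition, processing, and output transfers proceed concurrently at the frame level. The slot layout is hypothetical.

    ```python
    # Fixed-slot TDM arbitration over a single shared bus.
    SLOT_TABLE = ["acquisition", "processing", "processing", "output"]

    def bus_cycles(n):
        """Yield (cycle, owner) pairs for n bus cycles of the TDM schedule."""
        for cycle in range(n):
            yield cycle, SLOT_TABLE[cycle % len(SLOT_TABLE)]

    for cycle, owner in bus_cycles(8):
        print(f"cycle {cycle}: bus granted to the {owner} path")
    ```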

  11. Coal-fired high performance power generating system. Quarterly progress report, October 1, 1994--December 31, 1994

    SciTech Connect

    1995-08-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: (1) >47% thermal efficiency; (2) NOx, SOx and particulates ≤25% NSPS; (3) cost ≥65% of heat input; (4) all solid wastes benign. In our design considerations, we have tried to render all waste streams benign and, if possible, convert them to commercial products. It appears that vitrified slag has commercial value. If the flyash is reinjected through the furnace along with the dry bottom ash, then the amount of the less valuable solid waste stream (ash) can be minimized. A limitation on this procedure arises if it results in the buildup of toxic metal concentrations in the slag, the flyash, or other APCD components. We have assembled analytical tools to describe the progress of specific toxic metals in our system; the outline of the analytical procedure is presented in the first section of this report. The strengths and corrosion resistance of five candidate refractories were studied this quarter, and some of the results are presented and compared for selected preparation conditions (mixing, drying time, and drying temperature). A 100-hour pilot-scale slagging combustor test of the prototype radiant panel is being planned. Several potential refractory brick materials are under review and five will be selected for the first 100-hour test. The design of the prototype panel is presented along with some of the test requirements.

  12. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  13. Acetonitrile shortage: use of isopropanol as an alternative elution system for ultra/high performance liquid chromatography

    PubMed Central

    Desai, Ankur M.; Andreae, Mark; Mullen, Douglas G.; Holl, Mark M. Banaszak; Baker, James R.

    2010-01-01

    Acetonitrile is the solvent of choice for almost all chromatographic separations. In recent years, researchers around the globe have faced an acetonitrile shortage that has affected routine analytical operations. Researchers have tried to counter this shortage with many innovative solutions, including using ultra-performance liquid chromatography (UPLC) columns that are shorter and smaller in diameter than traditional high-performance liquid chromatography (HPLC) columns, thus significantly decreasing the volume of eluent required. Although utilizing UPLC in place of HPLC can alleviate the solvent demand to some extent, acetonitrile is still generally regarded as the solvent of choice due to its versatility. In the following communication, we describe an alternative eluent system that uses isopropanol in place of acetonitrile as the organic modifier for routine chromatographic separations. We report the development of an isopropanol-based UPLC protocol for G5 PAMAM dendrimer-based conjugates that was transferred to semi-preparative applications. PMID:21572563

  14. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

    The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy, which segments the lumen-intima and media-adventitia borders, classified under a class of patented AtheroEdge™ systems (Global Biomedical Technologies, Inc., CA, USA). Guidelines to produce accurate and repeatable measurements of the intima-media thickness are provided, and the problem of the different distance metrics one can adopt is addressed. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and show final segmentation results for both techniques. The overall rationale is to provide user-independent, high-performance techniques suitable for screening and remote monitoring.

  15. Enabling Interoperation of High Performance, Scientific Computing Applications: Modeling Scientific Data with the Sets & Fields (SAF) Modeling System

    SciTech Connect

    Miller, M C; Reus, J F; Matzke, R P; Arrighi, W J; Schoof, L A; Hitt, R T; Espen, P K; Butler, D M

    2001-02-07

    This paper describes the Sets and Fields (SAF) scientific data modeling system. It is a revolutionary approach to interoperation of high performance, scientific computing applications based upon rigorous, math-oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever-expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model and an overview of SAF, along with preliminary results from its use in a few ASCI codes, are discussed.
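
    To illustrate the building-block idea only (the abstract does not give SAF's actual API; every name below is hypothetical), a minimal sketch of modeling data with sets, relations and fields might look like this in Python:

    # Minimal sketch of set/relation/field modeling, loosely inspired by the
    # SAF idea of composing scientific data from mathematical primitives.
    # All class and variable names here are hypothetical, not SAF's API.

    class SetOfEntities:
        """A named collection of abstract entities (e.g., mesh nodes)."""
        def __init__(self, name, size):
            self.name, self.size = name, size

    class Relation:
        """Maps each member of a subset onto a member of a superset."""
        def __init__(self, subset, superset, mapping):
            assert all(0 <= m < superset.size for m in mapping)
            self.subset, self.superset, self.mapping = subset, superset, mapping

    class Field:
        """Associates a value with each member of a set."""
        def __init__(self, base_set, values):
            assert len(values) == base_set.size
            self.base_set, self.values = base_set, values

    nodes = SetOfEntities("nodes", 4)
    boundary = SetOfEntities("boundary_nodes", 2)
    on_boundary = Relation(boundary, nodes, mapping=[0, 3])
    temperature = Field(nodes, values=[273.0, 280.5, 291.2, 310.7])
    # Restrict a field to a subset through the relation:
    boundary_temps = [temperature.values[i] for i in on_boundary.mapping]
    print(boundary_temps)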

  16. High Performance, Low Operating Voltage n-Type Organic Field Effect Transistor Based on Inorganic-Organic Bilayer Dielectric System

    NASA Astrophysics Data System (ADS)

    Dey, A.; Singh, A.; Kalita, A.; Das, D.; Iyer, P. K.

    2016-04-01

    The performance of organic field-effect transistors (OFETs) fabricated utilizing the vacuum-deposited n-type conjugated molecule N,N'-dioctadecyl-1,4,5,8-naphthalenetetracarboxylic diimide (NDIOD2) was investigated using single and bilayer dielectric systems over a low-cost glass substrate. The single layer device structure consists of poly(vinyl alcohol) (PVA) as the dielectric material, whereas the bilayer systems contain two different device configurations, namely aluminum oxide/poly(vinyl alcohol) (Al2O3/PVA) and aluminum oxide/poly(methyl methacrylate) (Al2O3/PMMA), in order to reduce the operating voltage and improve the device performance. It was observed that the devices with the Al2O3/PMMA bilayer dielectric system and top-contact aluminum electrodes exhibit excellent n-channel behaviour under vacuum compared to the other two structures, with an electron mobility of 0.32 cm2/Vs, threshold voltage ~1.8 V and current on/off ratio ~10^4, operating under a very low voltage (6 V). These devices demonstrate highly stable electrical behaviour under multiple scans and lower threshold-voltage instability in vacuum, even after 7 days, than the Al2O3/PVA device structure. This low operating voltage, high performance OTFT device with a bilayer dielectric system is expected to have diverse applications in the next generation of OTFT technologies.
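
    For context, a standard relation from general OFET theory (not equations given in this abstract): mobility in the saturation regime is usually extracted from

        I_{D,\mathrm{sat}} = \frac{W}{2L}\,\mu\,C_i\,(V_{GS}-V_T)^2

    so increasing the dielectric capacitance per unit area C_i (the role of the thin Al2O3 layer in the bilayer stack) delivers the same drain current at a lower gate voltage, which is the route by which such devices reach ~6 V operation.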

  17. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans J.

    1991-01-01

    NASA has initiated technology development programs to develop advanced solar dynamic power systems and components for space applications beyond 2000. Conceptual design work that was performed is described. The main efforts were: (1) conceptual design of a self-deploying, high-performance parabolic concentrator; and (2) materials selection for a lightweight, shape-stable concentrator. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes an annular shape with a void in the center. This deployable concentrator concept is applicable to a range of solar dynamic power systems of 25 kWe to in excess of 75 kWe. The concept allows for a family of power system sizes all using the same packaging and deployment technique. The primary structural material selected for the concentrator is a polyetheretherketone (PEEK)/carbon fiber composite, also referred to as APC-2 or Victrex. This composite has a nearly neutral coefficient of thermal expansion, which leads to shape-stable characteristics under thermal gradient conditions. Substantial efforts were undertaken to produce a highly specular surface on the composite. The overall coefficient of thermal expansion of the composite laminate is near zero, but thermally induced stresses due to micro-movement of the fibers and matrix in relation to each other cause the surface to become nonspecular.

  18. A high performance system to study the influence of temperature in on-line solid-phase extraction capillary electrophoresis.

    PubMed

    Tascon, Marcos; Benavente, Fernando; Sanz-Nebot, Victoria; Gagliardi, Leonardo G

    2015-03-10

    A novel high performance system to control the temperature of the microcartridge in on-line solid phase extraction capillary electrophoresis (SPE-CE) is introduced. The mini-device consists of a thermostatic bath that fits inside the cassette of any commercial CE instrument, while its temperature is controlled from an external circuit of liquid connecting three different water baths. The circuits are controlled from a switchboard connected to an array of electrovalves that allow the water circulation through the mini-thermostatic-bath to be rapidly alternated between temperatures from 5 to 90 °C. The combination of the mini-device and the forced-air thermostatization system of the commercial CE instrument allows independent optimization of the temperature of the sample loading, clean-up, analyte elution and electrophoretic separation steps. The system is used to study the effect of temperature on the C18-SPE-CE analysis of the opioid peptides Dynorphin A (Dyn A), Endomorphin-1 (END) and Met-enkephalin (MET), in both standard solutions and spiked plasma samples. Extraction recoveries were found to depend, with a non-monotonic trend, on the microcartridge temperature during sample loading, and became maximum at 60 °C. The results prove the potential of temperature control to further enhance sensitivity in SPE-CE when analytes are thermally stable. PMID:25732315

  19. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1999-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the estimated closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.
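
    For reference, the classical fourth-order lateral-directional rigid-body model that such low-order equivalent-system fits assume (standard flight dynamics, not equations given in the paper) has states sideslip angle β, roll rate p, yaw rate r and bank angle φ:

        \dot{x} = A x + B u, \qquad x = [\beta,\ p,\ r,\ \phi]^{T}, \qquad u = [\delta_{a},\ \delta_{r}]^{T}

    where δ_a and δ_r are the aileron (lateral stick) and rudder (pedal) inputs; the identification estimates the elements of A and B, together with the input time delays, from the frequency-domain data.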

  20. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1996-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper, only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  1. Synthesis and Characterization of High Performance Polyimides Containing the Bicyclo(2.2.2)oct-7-ene Ring System

    NASA Technical Reports Server (NTRS)

    Alvarado, M.; Harruna, I. I.; Bota, K. B.

    1997-01-01

    Due to the difficulty of processing polyimides with high temperature stability and good solvent resistance, we have synthesized high performance polyimides containing the bicyclo(2.2.2)oct-7-ene ring system which can easily be fabricated into films and fibers and subsequently converted to the more stable aromatic polyimides. In order to improve processability, we prepared two polyimides by reacting 1,4-phenylenediamine and 1,3-phenylenediamine with bicyclo(2.2.2)-7-octene-2,3,5,6-tetracarboxylic dianhydride. The polyimides were characterized by FTIR, FTNMR, solubility and thermal analysis. Thermogravimetric analysis (TGA) showed that the 1,4-phenylenediamine- and 1,3-phenylenediamine-containing polyimides were stable up to 460 and 379 C, respectively, under a nitrogen atmosphere. No melting transitions were observed for either polyimide. The 1,4-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide and methane sulfonic acid, and soluble in sulfuric acid at room temperature. The 1,3-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide, tetramethyl urea and N,N-dimethyl acetamide, and soluble in methane sulfonic acid and sulfuric acid.

  2. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface observations, upper air, etc.), together in one place. Our server-side architecture provides a real-time stream processing system which utilizes server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data', along with providing tools allowing novel visualization and seamless integration of data across time and space regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new

  3. Easy to use uncooled ¼ VGA 17 µm FPA development for high performance compact and low-power systems

    NASA Astrophysics Data System (ADS)

    Robert, P.; Tissot, JL.; Pochic, D.; Gravot, V.; Bonnaire, F.; Clerambault, H.; Durand, A.; Tinnes, S.

    2012-06-01

    The high level of expertise accumulated by ULIS and CEA/LETI on uncooled microbolometers made from amorphous silicon has enabled ULIS to develop a ¼ VGA IRFPA format with 17 μm pixel pitch to support the development of low-power, low-weight (SWaP) and high performance IR systems. The ROIC architecture is described, in which innovations are widely implemented on-chip to make operation easier for the user. The detector configuration (integration time, windowing, gain, scanning direction, etc.) is driven by a standard I²C link. Like most visible-light arrays, the detector adopts the HSYNC/VSYNC free-run mode of operation, driven with only one master clock (MC) supplied to the ROIC, which feeds back pixel, line and frame synchronizations. On-chip PROM memory is available for storing detector characteristics under customer operational conditions. Power consumption has been kept low: less than 60 mW is possible in analog mode at 60 Hz, and less than 175 mW in digital mode (14 bits). A wide electrical dynamic range (2.4 V) is maintained despite the use of an advanced CMOS node. The specific appeal of this unit lies in the high uniformity and easy operation it provides. The reduced pixel pitch turns this TEC-less ¼ VGA array into a product well adapted for high-resolution, compact systems. An NETD of 35 mK and a thermal time constant of 10 ms have been measured, leading to a 350 mK·ms figure of merit. We emphasize the NETD trade-off with wide thermal dynamic range, as well as the high uniformity of characteristics and pixel operability achieved thanks to the mastering of amorphous silicon technology coupled with the ROIC design. This technology node, associated with advanced packaging techniques, paves the way to compact, low-power systems.
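
    The quoted figure of merit is simply the product of sensitivity and thermal response time, a standard microbolometer metric where lower is better:

        \mathrm{FOM} = \mathrm{NETD}\times\tau_{\mathrm{th}} = 35\ \mathrm{mK}\times 10\ \mathrm{ms} = 350\ \mathrm{mK\cdot ms}

    It penalizes designs that buy sensitivity by slowing the pixel's thermal response, or vice versa.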

  4. High-Performance SiC/SiC Ceramic Composite Systems Developed for 1315 C (2400 F) Engine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann; Morscher, Gregory N.; Bhatt, Ramakrishna T.

    2004-01-01

    As structural materials for hot-section components in advanced aerospace and land-based gas turbine engines, silicon carbide (SiC) ceramic matrix composites reinforced by high performance SiC fibers offer a variety of performance advantages over current bill-of-material alloys, such as nickel-based superalloys. These advantages stem from the SiC/SiC composites' higher temperature capability for a given structural load, lower density (approximately 30 to 50 percent of metal density), and lower thermal expansion. These properties should, in turn, result in many important engine benefits, such as reduced component cooling air requirements, simpler component design, reduced support structure weight, improved fuel efficiency, reduced emissions, higher blade frequencies, reduced blade clearances, and higher thrust. Under the NASA Ultra-Efficient Engine Technology (UEET) Project, much progress has been made at the NASA Glenn Research Center in identifying and optimizing two high-performance SiC/SiC composite systems. The table compares typical properties of oxide/oxide panels and SiC/SiC panels formed by the random stacking of balanced 0 degrees/90 degrees fabric pieces reinforced by the indicated fiber types. The Glenn SiC/SiC systems A and B (shaded area of the table) were reinforced by the Sylramic-iBN SiC fiber, which was produced at Glenn by thermal treatment of the commercial Sylramic SiC fiber (Dow Corning, Midland, MI; ref. 2). The treatment process (1) removes boron from the Sylramic fiber, thereby improving fiber creep, rupture, and oxidation resistance, and (2) allows the boron to react with nitrogen to form a thin in situ grown BN coating on the fiber surface, thereby providing an oxidation-resistant buffer layer between contacting fibers in the fabric and the final composite. The fabric stacks for all SiC/SiC panels were provided to GE Power Systems Composites for chemical vapor infiltration of Glenn-designed BN fiber coatings and conventional SiC matrices

  5. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching and by efficiently sharing data plane resources through statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data-center interconnections. Experimental results assessing DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer. Numerical investigation of system performance when the optical switch is scaled to a 32x32 port count indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing a Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.
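
    The Petabit/s figure follows from simple aggregation of the ToR interfaces:

        1000 \times 1\ \mathrm{Tb/s} = 10^{3}\times 10^{12}\ \mathrm{b/s} = 1\ \mathrm{Pb/s}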

  6. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues with the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, and ibmon2 requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters: over certain counts, they report errors to on-call system administrators and update a grid to show the local host with the issue.
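
    The abstract describes the filters only at this high level; a minimal sketch of the threshold idea, with hypothetical counter names, limits and alerting hook (not the actual ibmon2 code), might look like:

    # Hedged sketch of a port-counter threshold filter in the spirit of the
    # ibmon2 filters described above. Counter names, thresholds and the
    # alerting hook are all hypothetical.

    THRESHOLDS = {
        "symbol_error": 100,      # link-integrity errors per polling interval
        "link_downed": 1,         # any link bounce is worth a page
        "port_rcv_errors": 50,
    }

    def check_port(host, port, counters, alert):
        """Compare one port's counters against thresholds; alert on excess."""
        for name, limit in THRESHOLDS.items():
            value = counters.get(name, 0)
            if value > limit:
                alert(f"{host} port {port}: {name}={value} exceeds {limit}")

    def print_alert(msg):
        print("ALERT:", msg)   # a real system would notify Zenoss/Splunk/on-call

    # Example: feed one polled sample through the filter.
    check_port("node042", 1,
               {"symbol_error": 250, "link_downed": 0}, print_alert)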

  7. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap (HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10^12 charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10^2 MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and 20 kV. To date, a series of normal matter experiments, using positive and negative ions, have been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a non-destructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by making use of the actual annihilation products as a unique indicator. The immediate yield of an annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence meters and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic ray events from the multi-particle annihilation shower. This system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling mode that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods; this provides an independent method of validating particle lifetimes. The second is a high-speed mode accumulating information on a microsecond time scale, useful when trapped antiparticles are extracted
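
    As an illustration of the voting-logic idea (a generic sketch, not the HiPAT electronics; the window width and vote threshold are invented for the example), an N-fold coincidence count over discriminator hit times might look like:

    # Generic N-of-M coincidence voting over discriminator hit timestamps.
    # Window width and vote threshold are illustrative, not HiPAT's settings.

    def count_coincidences(channel_hits, window_ns=50.0, votes_required=3):
        """channel_hits: list of per-channel hit-time lists (ns).
        Counts events where >= votes_required channels fire within one
        window (multi-pion annihilation candidates), while single-channel
        hits (lone cosmic rays) are rejected."""
        all_hits = sorted((t, ch) for ch, hits in enumerate(channel_hits)
                          for t in hits)
        events, i = 0, 0
        while i < len(all_hits):
            t0 = all_hits[i][0]
            in_window = {ch for t, ch in all_hits if t0 <= t < t0 + window_ns}
            if len(in_window) >= votes_required:
                events += 1
                # skip past this window so one shower counts once
                while i < len(all_hits) and all_hits[i][0] < t0 + window_ns:
                    i += 1
            else:
                i += 1
        return events

    # Three channels firing within 50 ns -> one candidate annihilation event.
    print(count_coincidences([[100.0], [110.0], [120.0], []]))  # -> 1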

  9. Inverse opal-inspired, nanoscaffold battery separators: a new membrane opportunity for high-performance energy storage systems.

    PubMed

    Kim, Jung-Hwan; Kim, Jeong-Hoon; Choi, Keun-Ho; Yu, Hyung Kyun; Kim, Jong Hun; Lee, Joo Sung; Lee, Sang-Young

    2014-08-13

    The facilitation of ion/electron transport, along with ever-increasing demand for high-energy density, is a key to boosting the development of energy storage systems such as lithium-ion batteries. Among major battery components, separator membranes have not been the center of attention compared to other electrochemically active materials, despite their important roles in allowing ionic flow and preventing electrical contact between electrodes. Here, we present a new class of battery separator based on inverse opal-inspired, seamless nanoscaffold structure ("IO separator"), as an unprecedented membrane opportunity to enable remarkable advances in cell performance far beyond those accessible with conventional battery separators. The IO separator is easily fabricated through one-pot, evaporation-induced self-assembly of colloidal silica nanoparticles in the presence of ultraviolet (UV)-curable triacrylate monomer inside a nonwoven substrate, followed by UV-cross-linking and selective removal of the silica nanoparticle superlattices. The precisely ordered/well-reticulated nanoporous structure of IO separator allows significant improvement in ion transfer toward electrodes. The IO separator-driven facilitation of the ion transport phenomena is expected to play a critical role in the realization of high-performance batteries (in particular, under harsh conditions such as high-mass-loading electrodes, fast charging/discharging, and highly polar liquid electrolyte). Moreover, the IO separator enables the movement of the Ragone plot curves to a more desirable position representing high-energy/high-power density, without tailoring other battery materials and configurations. This study provides a new perspective on battery separators: a paradigm shift from plain porous films to pseudoelectrochemically active nanomembranes that can influence the charge/discharge reaction. PMID:24979037

  10. Application of the Sherlock Mycobacteria Identification System using high-performance liquid chromatography in a clinical laboratory.

    PubMed

    Kellogg, J A; Bankert, D A; Withers, G S; Sweimler, W; Kiehn, T E; Pfyffer, G E

    2001-03-01

    There is a growing need for a more accurate, rapid, and cost-effective alternative to conventional tests for identification of clinical isolates of Mycobacterium species. Therefore, the ability of the Sherlock Mycobacteria Identification System (SMIS; MIDI, Inc.) using computerized software and a Hewlett-Packard series 1100 high-performance liquid chromatograph to identify mycobacteria was compared to identification using phenotypic characteristics, biochemical tests, probes (Gen-Probe, Inc.), gas-liquid chromatography, and, when necessary, PCR-restriction enzyme analysis of the 65-kDa heat shock protein gene and 16S rRNA gene sequencing. Culture, harvesting, saponification, extraction, derivatization, and chromatography were performed following MIDI's instructions. Of 370 isolates and stock cultures tested, 327 (88%) were given species names by the SMIS. SMIS software correctly identified 279 of the isolates (75% of the total number of isolates and 85% of the named isolates). The overall predictive value of accuracy (correct calls divided by total calls of a species) for SMIS species identification was 85%, ranging from only 27% (3 of 11) for M. asiaticum to 100% for species or groups including M. malmoense (8 of 8), M. nonchromogenicum (11 of 11), and the M. chelonae-abscessus complex (21 of 21). By determining relative peak height ratios (RPHRs) and relative retention times (RRTs) of selected mycolic acid peaks, as well as phenotypic properties, all 48 SMIS-misidentified isolates and 39 (91%) of the 43 unidentified isolates could be correctly identified. Material and labor costs per isolate were $10.94 for SMIS, $26.58 for probes, and $42.31 for biochemical identification. The SMIS, combined with knowledge of RPHRs, RRTs, and phenotypic characteristics, offers a rapid, reasonably accurate, cost-effective alternative to more traditional methods of mycobacterial species identification.
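
    The two chromatographic ratios that rescue the misidentified and unidentified isolates are defined in the usual way (standard chromatography definitions, not SMIS-specific):

        \mathrm{RRT}_i = \frac{t_{R,i}}{t_{R,\mathrm{ref}}}, \qquad \mathrm{RPHR}_{ij} = \frac{h_i}{h_j}

    where the t_R are mycolic acid peak retention times measured against a reference peak and the h are selected peak heights; species with ambiguous library matches can often be separated by these ratios together with phenotypic properties.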

  11. High-performance two-axis gimbal system for free space laser communications onboard unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Locke, Michael; Czarnomski, Mariusz; Qadir, Ashraf; Setness, Brock; Baer, Nicolai; Meyer, Jennifer; Semke, William H.

    2011-03-01

    A custom designed and manufactured gimbal with a wide field of view and fast response time is developed. This enhanced custom design is a 24-volt system with integrated motor controllers and drivers which offers a full 180° field of view in both azimuth and elevation; this provides more continuous tracking capability as well as increased velocities of up to 479° per second. The addition of active high-frequency vibration control, to complement the passive vibration isolation system, is also in development. The ultimate goal of this research is to achieve affordable, reliable, and secure air-to-air laser communications between two separate remotely piloted aircraft. As a proof of concept, the practical implementation of an air-to-ground laser-based video communications payload system flown by a small Unmanned Aerial Vehicle (UAV) will be demonstrated. A numerical tracking algorithm has been written, tested, and used to aim the airborne laser transmitter at a stationary ground-based receiver with known GPS coordinates; however, further refinement of the tracking capabilities is dependent on an improved gimbal design for precision pointing of the airborne laser transmitter. The current gimbal pointing system is a two-axis, commercial-off-the-shelf component, which is limited in both range and velocity: it is capable of 360° of pan and 78° of tilt at a velocity of 60° per second. The control algorithm used for aiming the gimbal is executed on a PC-104 format embedded computer onboard the payload to accurately track a stationary ground-based receiver. This algorithm autonomously calculates a line-of-sight vector in real time using the UAV autopilot's Differential Global Positioning System (DGPS), which provides latitude, longitude, and altitude, and Inertial Measurement Unit (IMU), which provides roll, pitch, and yaw data, together with the known Global Positioning System (GPS) location of the ground-based photodiode array receiver.
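
    The geometry behind such a line-of-sight calculation can be sketched as follows (a minimal illustration using WGS-84 constants; it converts both GPS fixes to Earth-centered coordinates and expresses the difference in the local east/north/up frame, but omits the body-frame roll/pitch/yaw rotation the real tracker must also apply, and the coordinates shown are invented):

    # Hedged sketch of line-of-sight pointing from an aircraft GPS fix to a
    # known ground receiver: geodetic -> ECEF -> local ENU -> azimuth/elevation.
    import math

    A = 6378137.0               # WGS-84 semi-major axis, m
    E2 = 6.69437999014e-3       # WGS-84 first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        x = (n + alt_m) * math.cos(lat) * math.cos(lon)
        y = (n + alt_m) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
        return x, y, z

    def los_az_el(uav, target):
        """uav, target: (lat_deg, lon_deg, alt_m). Returns (azimuth,
        elevation) in degrees in the local ENU frame at the UAV position."""
        ux, uy, uz = geodetic_to_ecef(*uav)
        tx, ty, tz = geodetic_to_ecef(*target)
        dx, dy, dz = tx - ux, ty - uy, tz - uz
        lat, lon = math.radians(uav[0]), math.radians(uav[1])
        # Rotate the ECEF difference vector into east/north/up components.
        east = -math.sin(lon) * dx + math.cos(lon) * dy
        north = (-math.sin(lat) * math.cos(lon) * dx
                 - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
        up = (math.cos(lat) * math.cos(lon) * dx
              + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
        az = math.degrees(math.atan2(east, north)) % 360.0
        el = math.degrees(math.atan2(up, math.hypot(east, north)))
        return az, el

    # UAV at 500 m looking at a receiver a few km away (illustrative fix).
    print(los_az_el((47.95, -97.15, 500.0), (47.92, -97.10, 250.0)))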

  12. Engineering development of coal-fired high performance power systems, Phase 2: Selective non-catalytic reduction system development

    SciTech Connect

    1997-02-24

    Most of the available computational models for Selective Non-Catalytic Reduction (SNCR) systems are capable of identifying injection parameters such as spray droplet size, injection angle and velocity. These results allow identification of the appropriate injection locations based on the temperature window and on mixing for effective dispersion of the reagent. However, in order to quantify NOx reduction and estimate the potential for ammonia slip, a kinetic model must be coupled with the mixing predictions. Typically, reaction mechanisms for SNCR consist of over 100 elementary steps occurring between approximately 30 different species; trying to model a mechanism of this size is not practical. This ABB project incorporated development of an SNCR model covering NOx reduction and ammonia slip. The model was validated using data collected from a large-scale experimental test facility, and the model developed under this project can be utilized for SNCR system design applicable to HIPPS. The HITAF design in the HIPPS project includes a low-NOx firing system in the coal combustor, selective non-catalytic reduction (SNCR) downstream of the radiant heating section, and selective catalytic reduction (SCR) in a lower temperature zone. The performance of the SNCR will dictate the capacity and capital cost requirements of the SCR.
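
    The modeling compromise described above -- replacing the full ~100-step mechanism with a reduced scheme coupled to the mixing model -- can be illustrated with the classic two-step lumped SNCR kinetics (one lumped NH3 + NO reduction path, one lumped NH3 oxidation path that regenerates NO at high temperature). The sketch below uses invented rate parameters chosen only to reproduce the temperature-window behavior qualitatively; it is not the validated ABB model:

    # Illustrative two-step lumped SNCR kinetics. All rate parameters are
    # placeholders, NOT the ABB/HIPPS model: they are tuned only so the
    # window behavior is qualitatively right in ppm/second units.
    import math

    def rate_constants(T):
        k_red = 3.6e9 * math.exp(-30000.0 / T)    # ppm^-1 s^-1, NH3 + NO path
        k_oxi = 2.4e16 * math.exp(-45000.0 / T)   # s^-1, NH3 -> NO path
        return k_red, k_oxi

    def integrate(T, nh3=400.0, no=200.0, dt=1e-4, t_end=0.5):
        """Explicit-Euler integration over a fixed residence time (ppm)."""
        k_red, k_oxi = rate_constants(T)
        for _ in range(int(t_end / dt)):
            r_red, r_oxi = k_red * nh3 * no, k_oxi * nh3
            nh3 = max(nh3 - (r_red + r_oxi) * dt, 0.0)
            no = max(no + (r_oxi - r_red) * dt, 0.0)
        return nh3, no    # residual NH3 ~ ammonia slip; residual NO

    for T in (1000.0, 1200.0, 1400.0):   # K: below, inside, above the window
        slip, no_out = integrate(T)
        print(f"T={T:.0f} K  NH3 slip={slip:.1f} ppm  NO out={no_out:.1f} ppm")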

  13. Coal-fired high performance power generating system. Quarterly progress report, April 1, 1994--June 30, 1994

    SciTech Connect

    Not Available

    1995-02-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: (1) thermal efficiency > 47%; (2) NOx, SOx and particulates ≤ 25% of NSPS; (3) coal providing ≥ 65% of heat input; (4) all solid wastes benign. In order to achieve these goals, this team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis, the authors have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The efforts in Task 3.1.1 have focused on an evaluation of the various in-furnace NOx control strategies, including SNCR. Experimental work on gas stabilization, air staging, reburning and optimized SNCR is presented here. The model predicts that, by judicious combination of all these approaches, the NOx goal of 0.06 lb NOx/MBtu fuel can be met. This combination of experimental and analytical approaches provides the best perspective for a cost-effective evaluation of all the NOx control strategies, including SCR. Under Task 3.1.2, work has been progressing on the design of the slag screen. The design analysis has been improved to account for tube placement and tube roughness; the latter parameter has been varied to include the effects of deposit formation. Pressure drop, heat loss and screen efficiency can now be optimized. The changes in the designs of both the radiant and convective air heaters have resulted in a new appraisal of potential material requirements. This work, being carried out under Task 3.1.3, has focused on high-strength cast superalloys for strength and an array of alloy and ceramic materials for corrosion-resistant coatings. An outline of the work to be performed under Task 3.1.7, Combustor Controls, completes this report.

  14. Engineering development of coal-fired high performance power systems, Phase II and Phase III. Quarter progress report, April 1, 1996--June 30, 1996

    SciTech Connect

    1996-11-01

    Work is presented on the development of a coal-fired high performance power generation system by the year 2000. This report describes the design of the air heater, the duct heater, the system controls, and a quench zone, along with slag viscosity studies.

  15. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  16. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  17. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single crystal sapphire can be a mechanically equivalent replacement for high strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  18. High Performance Work and Learning Systems: Crafting a Worker-Centered Approach. Proceedings of a Conference (Washington, D.C., September 1991).

    ERIC Educational Resources Information Center

    Marschall, Daniel, Ed.

    A consensus that unions must develop coherent and comprehensive policies on new work systems and continuous learning in order to guide local activities was the central theme of this conference on the interrelated issues of the high performance work organization. These proceedings include the following presentations: "Labor's Stake in High…

  19. Department of Energy Project ER25739 Final Report QoS-Enabled, High-performance Storage Systems for Data-Intensive Scientific Computing

    SciTech Connect

    Rangaswami, Raju

    2009-05-31

    This project's work resulted in the following research projects: (1) BORG - Block-reORGanization for Self-optimizing Storage Systems; (2) ABLE - Active Block Layer Extensions; (3) EXCES - EXternal Caching in Energy-Saving Storage Systems; (4) GRIO - Guaranteed-Rate I/O Scheduler. These projects together help in substantially advancing the over-arching project goal of developing 'QoS-Enabled, High-Performance Storage Systems'.

  20. A new automated method to analyze urinary 8-hydroxydeoxyguanosine by a high-performance liquid chromatography-electrochemical detector system.

    PubMed

    Kasai, Hiroshi

    2003-06-01

    A new method was developed to analyze urinary 8-hydroxydeoxyguanosine (8-OH-dG) by high-performance liquid chromatography (HPLC) coupled to an electrochemical detector (ECD). This method is unique because (i) urine is first fractionated by anion exchange chromatography (polystyrene-type resin with quaternary ammonium group, sulfate form) before analysis by reverse phase chromatography; and (ii) the 8-OH-dG fraction in the first HPLC is precisely and automatically collected based on the added ribonucleoside 8-hydroxyguanosine marker peak, which elutes 4-5 min earlier. Up to 1,000 human urine samples can be continuously analyzed with high accuracy within a few months. This method will be useful for studies in radiotherapy, molecular epidemiology, risk assessment, and health promotion.
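
    The marker-based collection step can be sketched as follows (an illustrative toy, not the author's automation: the detector trace, threshold and timing offsets are invented; the paper states only that the marker elutes 4-5 min before 8-OH-dG):

    # Hedged sketch of marker-triggered fraction collection: watch the
    # first-dimension trace for the added 8-hydroxyguanosine marker peak,
    # then open the collector over a window a fixed offset later.

    def find_marker_time(trace, threshold):
        """trace: list of (time_min, signal). Return the time of the largest
        signal exceeding threshold, or None if the marker never appears."""
        peak = [(s, t) for t, s in trace if s > threshold]
        return max(peak)[1] if peak else None

    def collection_window(marker_time, offset_min=4.5, width_min=1.0):
        """8-OH-dG window: a fixed offset after the marker apex."""
        start = marker_time + offset_min
        return start, start + width_min

    # Synthetic detector trace: baseline 2, marker peak near 5.2-5.5 min.
    trace = [(t / 10.0, 100.0 if 52 <= t <= 55 else 2.0) for t in range(80)]
    t_marker = find_marker_time(trace, threshold=50.0)
    if t_marker is not None:
        print("collect 8-OH-dG fraction over", collection_window(t_marker))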

  1. Speciation of chromium in environmental samples by dual electromembrane extraction system followed by high performance liquid chromatography.

    PubMed

    Safari, Meysam; Nojavan, Saeed; Davarani, Saied Saeed Hosseiny; Morteza-Najarian, Amin

    2013-07-30

    This study proposes dual electromembrane extraction followed by high performance liquid chromatography for the selective separation and preconcentration of Cr(VI) and Cr(III) in different environmental samples. The method was based on the electrokinetic migration of chromium species toward the electrodes of opposite charge into two different hollow fibers. The extracted species were then complexed with ammonium pyrrolidinedithiocarbamate for HPLC analysis. The effects of analytical parameters including pH, type of organic solvent, sample volume, stirring rate, extraction time and applied voltage were investigated. The results showed that Cr(III) and Cr(VI) could be simultaneously extracted into the two different hollow fibers. Under optimized conditions, the analytes were quantified by HPLC, with acceptable linearity ranging from 20 to 500 μg L⁻¹ (R² values ≥ 0.9979) and repeatability (RSD) ranging between 9.8% and 13.7% (n=5). Preconcentration factors of 21.8 and 33, corresponding to recoveries of 31.1% and 47.2%, were achieved for Cr(III) and Cr(VI), respectively. The estimated detection limits (S/N ratio of 3:1) were less than 5.4 μg L⁻¹. Finally, the proposed method was successfully applied to determine Cr(III) and Cr(VI) species in real water samples. PMID:23856230
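
    Enrichment (preconcentration) factor and recovery are linked through the acceptor-to-sample volume ratio, a standard relation in membrane-extraction work (not stated in the abstract):

        EF = \frac{C_{a}}{C_{s}}, \qquad R(\%) = EF \times \frac{V_{a}}{V_{s}} \times 100

    The reported pairs are mutually consistent with a single volume ratio: 31.1/21.8 ≈ 47.2/33 ≈ 1.43, i.e. V_a/V_s ≈ 1/70.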

  2. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    The INL High Performance Building Strategy draws on the Leadership in Energy and Environmental Design (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach to high performance building at INL: the first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  3. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and poly(ethylene naphthalate)). Two different methods were used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy.

  4. Identification of high performance and component technology for space electrical power systems for use beyond the year 2000

    NASA Technical Reports Server (NTRS)

    Maisel, James E.

    1988-01-01

    Addressed are some of the space electrical power system technologies that should be developed for the U.S. space program to remain competitive in the 21st century. A brief historical overview of some U.S. manned and unmanned spacecraft power systems is given to establish that electrical systems are, and will continue to become, more sophisticated as power levels approach those on the ground. Adaptive/expert power systems that can function in an extraterrestrial environment will be required to take appropriate action during electrical faults so that the impact is minimal. Man-hours can be reduced significantly by relinquishing tedious routine system component maintenance to the adaptive/expert system. By cataloging component signatures over time, such a system can flag a premature component failure and thus possibly avoid a major fault. High frequency operation is important if the electrical power system mass is to be cut significantly. High power semiconductor or vacuum switching components will be required to meet future power demands. System mass tradeoffs have been investigated in terms of high temperature operation, efficiency, voltage regulation, and system reliability. High temperature semiconductors will be required: silicon carbide materials will operate at temperatures around 1000 K, and diamond materials up to 1300 K. The driver for elevated temperature operation is that radiator mass is reduced significantly, because the required radiator area scales inversely with the fourth power of temperature.
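
    The inverse-fourth-power remark is the Stefan-Boltzmann law applied to radiator sizing: a radiator that must reject power P at temperature T with emissivity ε needs area

        P = \varepsilon\,\sigma\,A\,T^{4} \quad\Rightarrow\quad A = \frac{P}{\varepsilon\,\sigma\,T^{4}}

    so doubling the rejection temperature cuts the required area, and roughly the mass, by a factor of 16, which is why 1000-1300 K semiconductors such as SiC and diamond are attractive.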

  5. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  6. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions for FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared and distributed memory machines.
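
    HPF itself is Fortran, so by way of analogy only, the flavor of the model (one global name space, whole-array data-parallel operations, no explicit communication) can be suggested in NumPy; the directive in the comment is paraphrased HPF, while the executable code is an ordinary Python stand-in:

    # Data-parallel flavor of HPF, paraphrased in NumPy: one global array,
    # whole-array operations, no explicit message passing. In real HPF the
    # layout would come from a directive such as
    #   !HPF$ DISTRIBUTE a(BLOCK)
    # and the compiler would generate any needed communication.
    import numpy as np

    n = 1_000_000
    a = np.linspace(0.0, 1.0, n)      # conceptually BLOCK-distributed
    b = np.empty_like(a)

    # Equivalent of an HPF FORALL / array assignment: every interior element
    # is updated "at once", leaving the runtime free to partition the work.
    b[1:-1] = 0.5 * (a[:-2] + a[2:])  # three-point smoothing stencil
    b[0], b[-1] = a[0], a[-1]         # boundary copy

    print(b[:5])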

  7. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    NASA Technical Reports Server (NTRS)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon based manufacturer of I/O computers. APTEC's work in the context of high density storage media is on programs requiring real-time data capture with low latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope-Data Archival and Distribution System. This is an existing Loral AeroSys designed system, which utilizes an APTEC I/O computer. The key attributes of a system architecture that is suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software designed for high bandwidth and low latency). APTEC's approach is outlined in this document.

  8. High performance satellite networks

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Edelson, Burton I.

    1997-06-01

    The high performance satellite communications networks of the future will have to be interoperable with terrestrial fiber cables. These satellite networks will evolve from narrowband analogue formats to broadband digital transmission schemes, with protocols, algorithms and transmission architectures that segment the data into uniform cells and frames, and then transmit these data via the larger and more efficient synchronous optical network (SONET) and asynchronous transfer mode (ATM) networks being developed for the information "superhighway". These high performance satellite communications and information networks are required for modern applications such as electronic commerce, digital libraries, medical imaging, distance learning, and the distribution of science data. In order for satellites to participate in these information superhighway networks, it is essential that they demonstrate their ability to: (1) operate seamlessly with heterogeneous architectures and applications, (2) carry data at SONET rates with the same quality of service as optical fibers, (3) qualify transmission delay as a parameter, not a problem, and (4) show that satellites have several performance and economic advantages over fiber cable networks.

  9. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop high-performance windows for commercial buildings that are cost-effective. The main performance requirement for these windows was an R-value of at least 5 ft²·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.
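
    For reference (a standard conversion, not project data), the R5 target corresponds to a whole-window U-factor of

        U = \frac{1}{R} = \frac{1}{5\ \mathrm{ft^{2}\,^{\circ}F\,h/Btu}} = 0.2\ \mathrm{Btu/(h\,ft^{2}\,^{\circ}F)}

    a substantial improvement over the double-pane, clear-glazed units being replaced.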

  10. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  11. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column.

    PubMed

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, because the design abandons a high-pressure H2 source in favor of the mini H2 generator, the system's explosion hazard is kept remote. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and a very good linear response, and was able to rapidly monitor complicated environmental VOC samples.

  13. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    SciTech Connect

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D. R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A. F.; Ratti, A.; Sabbi, G. L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-08-17

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and a 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy extraction failure during testing of a high-field Nb3Sn dipole.

  14. High Performance Hydrometeorological Modeling, Land Data Assimilation and Parameter Estimation with the Land Information System at NASA/GSFC

    NASA Astrophysics Data System (ADS)

    Peters-Lidard, C. D.; Kumar, S. V.; Santanello, J. A.; Tian, Y.; Rodell, M.; Mocko, D.; Reichle, R.

    2008-12-01

    The Land Information System (LIS; http://lis.gsfc.nasa.gov; Kumar et al., 2006; Peters-Lidard et al., 2007) is a flexible land surface modeling framework that has been developed with the goal of integrating satellite- and ground-based observational data products and advanced land surface modeling techniques to produce optimal fields of land surface states and fluxes. The LIS software was the co-winner of NASA's 2005 Software of the Year award. LIS facilitates the integration of observations from Earth-observing systems, and predictions and forecasts from Earth System and Earth science models, into the decision-making processes of partner agencies and national organizations. Due to its flexible software design, LIS can serve both as a Problem Solving Environment (PSE) for hydrologic research to enable accurate global water and energy cycle predictions, and as a Decision Support System (DSS) to generate useful information for application areas including disaster management, water resources management, agricultural management, numerical weather prediction, air quality and military mobility assessment. LIS has evolved from two earlier efforts - the North American Land Data Assimilation System (NLDAS; Mitchell et al. 2004) and the Global Land Data Assimilation System (GLDAS; Rodell et al. 2004) - that focused primarily on improving numerical weather prediction skills by improving the characterization of land surface conditions. Both of these systems now use specific configurations of the LIS software in their current implementations. LIS not only consolidates the capabilities of these two systems, but also enables a much larger variety of configurations with respect to horizontal spatial resolution, input datasets and choice of land surface model through 'plugins'. In addition to these capabilities, LIS has also been demonstrated for parameter estimation (Peters-Lidard et al., 2008; Santanello et al., 2007) and data assimilation (Kumar et al., 2008). Examples and case studies

  15. Low cost, high performance white-light fiber-optic hydrophone system with a trackable working point.

    PubMed

    Ma, Jinyu; Zhao, Meirong; Huang, Xinjing; Bae, Hyungdae; Chen, Yongyao; Yu, Miao

    2016-08-22

    A working-point trackable fiber-optic hydrophone with high acoustic resolution is proposed and experimentally demonstrated. The sensor is based on a polydimethylsiloxane (PDMS) cavity molded at the end of a single-mode fiber, acting as a low-finesse Fabry-Perot (FP) interferometer. Working-point tracking is achieved by using a low-cost white-light interferometric system with a simple tunable FP filter. By adjusting the optical path difference of the FP filter in real time, the sensor working point can be kept at its highest-sensitivity point. This helps counteract working-point drift due to hydrostatic pressure, water absorption, and/or temperature changes. It is demonstrated that the sensor system has a high resolution, with a minimum detectable acoustic pressure of 148 Pa, and superior stability compared to a system using a tunable laser. PMID:27557180
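
    The highest-sensitivity working point of such a low-finesse FP sensor lies at quadrature, where the interference fringe is steepest. As a rough illustration of that idea (not the authors' implementation), the following Python sketch models the reflected intensity with a two-beam approximation and locates the quadrature point numerically; the wavelength, index, and visibility values are hypothetical placeholders.

        import numpy as np

        def fp_intensity(cavity_len_m, wavelength_m=1.55e-6, n=1.4, visibility=0.5):
            # Two-beam approximation of a low-finesse Fabry-Perot reflection.
            phase = 4 * np.pi * n * cavity_len_m / wavelength_m
            return 0.5 * (1 + visibility * np.cos(phase))

        # Sensitivity |dI/dL| peaks at quadrature, where the cosine crosses zero.
        lengths = np.linspace(100e-6, 100.4e-6, 4001)
        slope = np.gradient(fp_intensity(lengths), lengths)
        print(f"quadrature near {lengths[np.argmax(np.abs(slope))]*1e6:.3f} um")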

  16. Use of Microdialysis-Based Continuous Glucose Monitoring to Drive Real-Time Semi-Closed-Loop Insulin Infusion

    PubMed Central

    Freckmann, Guido; Jendrike, Nina; Buck, Harvey; Bousamra, Steven; Galley, Paul; Thukral, Ajay; Wagner, Robin; Weinert, Stefan; Haug, Cornelia

    2014-01-01

    Continuous glucose monitoring (CGM) and automated insulin delivery may make diabetes management substantially easier, if the quality of the resulting therapy remains adequate. In this study, a semi-closed-loop control algorithm was used to drive insulin therapy and its quality was compared to that of subject-directed therapy. Twelve subjects stayed at the study site for approximately 70 hours and were provided with the investigational Automated Pancreas System Test Stand (APS-TS), which was used to calculate insulin dosage recommendations automatically. These recommendations were based on microdialysis CGM values and common diabetes therapy parameters. For the first half of their stay, the subjects directed their diabetes therapy themselves, whereas for the second half, the insulin recommendations were delivered by the APS-TS (so-called algorithm-driven therapy). During subject-directed therapy, the mean glucose was 114 mg/dl compared to 125 mg/dl during algorithm-driven therapy. Time in target (90 to 150 mg/dl) was approximately 46% during subject-directed therapy and approximately 58% during algorithm-driven therapy. When subjects directed their therapy, approximately 2 times more hypoglycemia interventions (oral administration of carbohydrates) were required than during algorithm-driven therapy. No hyperglycemia interventions (delivery of additional insulin) were necessary during subject-directed therapy, while during algorithm-driven therapy, 2 hyperglycemia interventions were necessary. The APS-TS was able to adequately control glucose concentrations in the subjects. Time in target was at least comparable or moderately higher during semi-closed-loop control, and markedly fewer hypoglycemia interventions were required, thus increasing patient safety. PMID:25205589
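
    The record does not disclose the APS-TS algorithm itself; purely as an illustration of how "common diabetes therapy parameters" can turn a CGM reading into a dosage recommendation, the sketch below implements a generic bolus-calculator rule (correction insulin plus meal insulin, net of insulin still on board). Every parameter value here is a hypothetical placeholder, not a clinical recommendation.

        def insulin_recommendation(cgm_mgdl, carbs_g, target_mgdl=120.0,
                                   correction_factor=40.0, carb_ratio=10.0,
                                   insulin_on_board=0.0):
            # Generic bolus-calculator rule: correction dose + meal dose - IOB.
            correction = max(cgm_mgdl - target_mgdl, 0.0) / correction_factor
            meal = carbs_g / carb_ratio
            return max(correction + meal - insulin_on_board, 0.0)

        print(insulin_recommendation(cgm_mgdl=190, carbs_g=30))  # 4.75 units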

  17. Making resonance a common case: a high-performance implementation of collective I/O on parallel file systems

    SciTech Connect

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2009-01-01

    Collective I/O is a widely used technique to improve I/O performance in parallel computing. It can be implemented as a client-based or server-based scheme. The client-based implementation is more widely adopted in MPI-IO software such as ROMIO because of its independence from the storage system configuration and its greater portability. However, existing implementations of client-side collective I/O do not take into account the actual pattern of file striping over multiple I/O nodes in the storage system. This can cause a significant number of requests for non-sequential data at I/O nodes, substantially degrading I/O performance. Investigating the surprisingly high I/O throughput achieved when there is an accidental match between a particular request pattern and the data striping pattern on the I/O nodes, we reveal the resonance phenomenon as the cause. Exploiting readily available information on data striping from the metadata server in popular file systems such as PVFS2 and Lustre, we design a new collective I/O implementation technique, resonant I/O, that makes resonance a common case. Resonant I/O rearranges requests from multiple MPI processes to transform non-sequential data accesses on I/O nodes into sequential accesses, significantly improving I/O performance without compromising the independence of a client-based implementation. We have implemented our design in ROMIO. Our experimental results show that the scheme can increase I/O throughput for some commonly used parallel I/O benchmarks such as mpi-io-test and ior-mpi-io over the existing implementation of ROMIO by up to 157%, with no scenario demonstrating significantly decreased performance.
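
    The core idea - rearranging requests so that each I/O node sees its data in sequential order under the file's striping layout - can be conveyed with a toy sketch. This is not the ROMIO implementation: it assumes simple round-robin striping, ignores requests that straddle stripe boundaries, and uses hypothetical parameters.

        def regroup_requests(requests, stripe_size, n_io_nodes):
            # Assign each (offset, length) request to its I/O node under
            # round-robin striping, then sort per node so that each node
            # services its requests in sequential offset order.
            per_node = {node: [] for node in range(n_io_nodes)}
            for offset, length in requests:
                node = (offset // stripe_size) % n_io_nodes
                per_node[node].append((offset, length))
            for node in per_node:
                per_node[node].sort()
            return per_node

        reqs = [(3 * 2**16, 4096), (0, 4096), (2 * 2**16, 4096), (2**16, 4096)]
        print(regroup_requests(reqs, stripe_size=2**16, n_io_nodes=2))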

  18. MO-G-17A-01: Innovative High-Performance PET Imaging System for Preclinical Imaging and Translational Researches

    SciTech Connect

    Sun, X; Lou, K; Deng, Z; Shao, Y

    2014-06-15

    Purpose: To develop a practical and compact preclinical PET with innovative technologies for the substantially improved imaging performance required for advanced imaging applications. Methods: Several key components of the detector, readout electronics and data acquisition have been developed and evaluated to achieve leapfrogged imaging performance over a prototype animal PET we had developed. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm³ LYSO scintillators with each end coupled to a latest-generation 4×4 array of 3×3 mm² Silicon Photomultipliers (with ∼0.2 mm insensitive gap between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to preserve an air gap to achieve higher depth-of-interaction (DOI) resolution and other detector performance. Front-end readout electronics with an upgraded 16-channel ASIC were newly developed and tested, as was the compact, high-density FPGA-based data acquisition and transfer system targeting a 10M/s coincidence counting rate with low power consumption. The energy, timing and DOI resolutions of the new detector module with the data acquisition system were evaluated. An initial Na-22 point-source image was acquired with 2 rotating detectors to assess the system imaging capability. Results: There are no insensitive gaps at the detector edge, so the module can be tiled into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. Measured energy, timing, and DOI resolutions are around 17%, 2.7 ns and 1.96 mm (mean value). A point-source image was acquired successfully without detector/electronics calibration and data correction. Conclusion: The newly developed detector and readout electronics will enable the targeted scalable and compact PET system in a stationary configuration with >15% sensitivity, ∼1.3 mm uniform imaging resolution, and a fast acquisition counting rate.
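
    The record does not state which DOI estimator is used; for dual-ended readout detectors of this kind, a common choice is the ratio of the light collected at the two crystal ends, scaled (after calibration) to the 30 mm crystal length. A minimal sketch of that estimator:

        def doi_from_dual_ended(amp_top, amp_bottom, crystal_len_mm=30.0):
            # Ratio estimator: interaction depth scales with the share of
            # light collected at one end (a real system calibrates this map).
            return crystal_len_mm * amp_top / (amp_top + amp_bottom)

        print(doi_from_dual_ended(420.0, 580.0))  # 12.6 mm from the top end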

  19. Compensation of Wave-Induced Motion and Force Phenomena for Ship-Based High Performance Robotic and Human Amplifying Systems

    SciTech Connect

    Love, LJL

    2003-09-24

    The decrease in manpower and increase in material handling needs on many Naval vessels provide the motivation to explore the modeling and control of Naval robotic and robotic assistive devices. This report addresses the design, modeling, control and analysis of position- and force-controlled robotic systems operating on the deck of a moving ship. First, we provide background information that quantifies the motion of the ship, both in terms of frequency and amplitude. We then formulate the motion of the ship in terms of homogeneous transforms. This transformation provides a link between the motion of the ship and the base of a manipulator. We model the kinematics of a manipulator as a serial extension of the ship motion. We then show how to use these transforms to formulate the kinetic and potential energy of a general, multi-degree-of-freedom manipulator moving on a ship. As a demonstration, we consider two examples: a one-degree-of-freedom system experiencing three sea states operating in a plane, to verify the methodology, and a 3-degree-of-freedom system experiencing all six degrees of ship motion, to illustrate the ease of computation and complexity of the solution. The first series of simulations explores the impact wave motion has on the tracking performance of a position-controlled robot. We provide a preliminary comparison between conventional linear control and Repetitive Learning Control (RLC) and show how fixed-time-delay RLC breaks down due to the varying nature of the wave disturbance frequency. Next, we explore the impact wave motion disturbances have on Human Amplification Technology (HAT). We begin with a description of the traditional HAT control methodology. Simulations show that the motion of the base of the robot, due to ship motion, generates disturbance forces reflected to the operator that significantly degrade the positioning accuracy and resolution at higher sea states. As with position-controlled manipulators, augmenting the control with a Repetitive
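
    The serial chaining of ship motion into the manipulator kinematics can be pictured as composing homogeneous transforms: a world-to-ship transform built from the wave-induced roll, pitch, yaw and heave, multiplied by a fixed ship-to-manipulator-base offset. The numpy sketch below shows that composition; the angles and offsets are hypothetical.

        import numpy as np

        def homogeneous(R, p):
            # Pack a rotation matrix and translation into a 4x4 transform.
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, p
            return T

        def rpy(roll, pitch, yaw):
            # Z-Y-X (yaw-pitch-roll) rotation composition.
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return Rz @ Ry @ Rx

        # Wave-induced roll/pitch/heave, then a fixed base offset on deck.
        T_world_ship = homogeneous(rpy(0.05, 0.02, 0.0), np.array([0.0, 0.0, 0.3]))
        T_ship_base = homogeneous(np.eye(3), np.array([5.0, 1.0, 2.0]))
        print((T_world_ship @ T_ship_base)[:3, 3])  # base position in world frame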

  20. High performance liquid level monitoring system based on polymer fiber Bragg gratings embedded in silicone rubber diaphragms

    NASA Astrophysics Data System (ADS)

    Marques, Carlos A. F.; Peng, Gang-Ding; Webb, David J.

    2015-05-01

    Liquid-level sensing technologies have attracted great attention, because such measurements are essential to industrial applications, such as fuel storage, flood warning and the biochemical industry. Traditional liquid-level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid-level sensors have been investigated which are based on different operating principles, such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid-level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid-level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression to the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, the use of linear regression using
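
    Because hydrostatic pressure grows linearly with depth, fitting a line to the sub-surface readings and solving for where it crosses ambient pressure recovers the surface position. A minimal sketch of that calculation, assuming water and hypothetical sensor heights:

        import numpy as np

        RHO_G = 1000.0 * 9.81  # water density x gravity, Pa per metre of depth

        def liquid_level(heights_m, pressures_pa, ambient_pa):
            # Regress pressure on sensor height for sub-surface sensors only,
            # then solve for the height at which pressure equals ambient.
            sub = pressures_pa > ambient_pa + 1.0
            slope, intercept = np.polyfit(heights_m[sub], pressures_pa[sub], 1)
            return (ambient_pa - intercept) / slope

        heights = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # above the tank bottom
        true_level = 1.3
        pressures = np.where(heights < true_level,
                             101325.0 + RHO_G * (true_level - heights), 101325.0)
        print(liquid_level(heights, pressures, 101325.0))  # ~1.3 m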

  1. High performance seizure-monitoring system using a vibration sensor and videotape recording: behavioral analysis of genetically epileptic rats.

    PubMed

    Amano, S; Yokoyama, M; Torii, R; Fukuoka, J; Tanaka, K; Ihara, N; Hazama, F

    1997-06-01

    A new seizure-monitoring apparatus containing a piezoceramic vibration sensor combined with videotape recording was developed. Behavioral analysis of Ihara's genetically epileptic rat (IGER), a recently developed mutant with spontaneous limbic-like seizures, was performed using this new device. Twenty 8-month-old male IGERs were monitored continuously for 72 h. Abnormal behaviors were detected by use of a vibration recorder, and epileptic seizures were confirmed by videotape recordings taken synchronously with the vibration recording. Representative forms of seizures were generalized convulsions and circling seizures. Generalized convulsions were found in 13 rats, and circling seizures in 7 of 20 animals. Two rats had both generalized and circling seizures, and two rats did not have seizures. Although there was no apparent circadian rhythm to the generalized seizures, circling seizures occurred mostly between 1800 and 0800 h. A correlation between the sleep-wake cycle and the occurrence of circling seizures seems likely. Without exception, all the seizure actions were recorded by the vibration recorder and the videotape recorder. To eliminate the risk of a false-negative result, investigators scrutinized the information obtained from the vibration sensor and the videotape recorder. The newly developed seizure-monitoring system was found to facilitate detailed analysis of epileptic seizures in rats.
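
    The record does not describe how the vibration traces were screened; one simple way to pair a vibration sensor with video review is to flag time windows whose RMS energy stands out from the baseline and check only those windows on tape. A hypothetical sketch:

        import numpy as np

        def flag_events(signal, fs_hz, win_s=2.0, z_cut=3.0):
            # Flag windows whose RMS deviates strongly from the median RMS;
            # flagged times would then be checked against the video record.
            n = int(win_s * fs_hz)
            nwin = len(signal) // n
            rms = np.sqrt((signal[:nwin * n].reshape(nwin, n) ** 2).mean(axis=1))
            z = (rms - np.median(rms)) / (rms.std() + 1e-12)
            return [i * win_s for i in range(nwin) if z[i] > z_cut]

        fs = 100.0
        t = np.arange(0, 60, 1 / fs)
        sig = 0.1 * np.random.randn(len(t))
        sig[3000:3500] += np.sin(2 * np.pi * 8 * t[3000:3500])  # simulated burst
        print(flag_events(sig, fs))  # flags windows around t = 30-35 s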

  2. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Volume 1, Final report

    SciTech Connect

    1996-02-01

    A major objective of the coal-fired high performance power systems (HIPPS) program is to achieve significant increases in the thermodynamic efficiency of coal use for electric power generation. Through increased efficiency, all airborne emissions can be decreased, including emissions of carbon dioxide. High performance power systems as defined for this program are coal-fired, high efficiency systems where the combustion products from coal do not contact the gas turbine. Typically, this type of system will involve some indirect heating of gas turbine inlet air and then topping combustion with a cleaner fuel. The topping combustion fuel can be natural gas or another relatively clean fuel. Fuel gas derived from coal is an acceptable fuel for the topping combustion. The ultimate goal for HIPPS is to have a system that has 95 percent of its heat input from coal. Interim systems that have at least 65 percent heat input from coal are acceptable, but these systems are required to have a clear development path to a system that is 95 percent coal-fired. A three-phase program has been planned for the development of HIPPS. Phase 1, reported herein, includes the development of a conceptual design for a commercial plant. Technical and economic feasibility have been analyzed for this plant. Preliminary R&D on some aspects of the system was also done in Phase 1, and a Research, Development and Test plan was developed for Phase 2. Work in Phase 2 includes the testing and analysis that is required to develop the technology base for a prototype plant. This work includes pilot plant testing at a scale of around 50 MMBtu/hr heat input. The culmination of the Phase 2 effort will be a site-specific design and test plan for a prototype plant. Phase 3 is the construction and testing of this plant.

  3. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
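
    The serial core that such a parallel implementation distributes is compact: mean-center the pixel-by-band matrix, form the band covariance, and project onto the leading eigenvectors (the report's contribution is splitting this work across cluster nodes). A sketch of that core with hypothetical dimensions:

        import numpy as np

        def pca_reduce(pixels_by_bands, k):
            # Project onto the k leading eigenvectors of the band covariance.
            x = pixels_by_bands - pixels_by_bands.mean(axis=0)
            cov = np.cov(x, rowvar=False)
            vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
            top = vecs[:, np.argsort(vals)[::-1][:k]]   # k principal components
            return x @ top

        cube = np.random.rand(10000, 224)    # hypothetical hyperspectral scene
        print(pca_reduce(cube, k=10).shape)  # (10000, 10)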

  4. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  5. Ionic liquid-based aqueous two-phase system, a sample pretreatment procedure prior to high-performance liquid chromatography of opium alkaloids.

    PubMed

    Li, Shehong; He, Chiyang; Liu, Huwei; Li, Kean; Liu, Feng

    2005-11-01

    An ionic liquid 1-butyl-3-methylimidazolium chloride ([C4 mim]Cl)/salt aqueous two-phase system (ATPS) was presented as a simple, rapid and effective sample pretreatment technique coupled with high-performance liquid chromatography (HPLC) for analysis of the major opium alkaloids in Pericarpium papaveris. To find optimal conditions, the partition behaviors of codeine and papaverine in ionic liquid/salt aqueous two-phase systems were investigated. Various factors were considered systematically, and the results indicated that both the pH value and the salting-out ability of the salt had a great influence on phase separation. The recoveries of codeine and papaverine were 90.0-100.2% and 99.3-102.0%, respectively, from aqueous samples of P. papaveris by the proposed method. PMID:16143571

  6. High Performance Astrophysics Computing

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Arca-Sedda, M.; Mastrobuono-Battisti, A.; Punzo, D.; Spera, M.

    2012-07-01

    The application of high-end computing to astrophysical problems, mainly in the galactic environment, has been developing for many years at the Dep. of Physics of Sapienza Univ. of Roma. The main scientific topic is the physics of self-gravitating systems, whose specific subtopics are: i) celestial mechanics and interplanetary probe transfers in the solar system; ii) dynamics of globular clusters and of globular cluster systems in their parent galaxies; iii) nuclear cluster formation and evolution; iv) massive black hole formation and evolution; v) young star cluster early evolution. In this poster we describe the software and hardware computational resources available in our group and how we are developing both software and hardware to reach the scientific aims itemized above.

  7. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyze a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. The data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
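
    The record names an "iterative z algorithm" without defining it. One standard iterative z-score scheme, implemented below purely for illustration, repeatedly removes points lying several standard deviations from the mean of the remaining data and treats the removed points as suspicious:

        import numpy as np

        def iterative_z_outliers(x, z_cut=3.0, max_iter=10):
            # Re-estimate mean/std after each removal pass until stable.
            idx = np.arange(len(x))
            outliers = []
            for _ in range(max_iter):
                mu, sd = x[idx].mean(), x[idx].std()
                z = np.abs(x[idx] - mu) / (sd + 1e-12)
                bad = idx[z > z_cut]
                if bad.size == 0:
                    break
                outliers.extend(bad.tolist())
                idx = idx[z <= z_cut]
            return outliers

        events = np.concatenate([np.random.normal(10, 1, 1000), [50.0, 80.0]])
        print(iterative_z_outliers(events))  # indices of the two injected spikes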

  8. Determination of sunset yellow and tartrazine in food samples by combining ionic liquid-based aqueous two-phase system with high performance liquid chromatography.

    PubMed

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We proposed a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine (Ta) and sunset yellow (SY) in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01-50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples, and leads to results consistent with those obtained from the Chinese national standard method. PMID:25538857

  9. Determination of Sunset Yellow and Tartrazine in Food Samples by Combining Ionic Liquid-Based Aqueous Two-Phase System with High Performance Liquid Chromatography

    PubMed Central

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We proposed a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine (Ta) and sunset yellow (SY) in food samples. Under the optimized conditions, IL-ATPSs generated an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01–50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. This method proves successful for the separation/analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples, and leads to results consistent with those obtained from the Chinese national standard method. PMID:25538857

  10. Developing collective customer knowledge and service climate: The interaction between service-oriented high-performance work systems and service leadership.

    PubMed

    Jiang, Kaifeng; Chuang, Chih-Hsun; Chiao, Yu-Ching

    2015-07-01

    This study theorized and examined the influence of the interaction between Service-Oriented high-performance work systems (HPWSs) and service leadership on collective customer knowledge and service climate. Using a sample of 569 employees and 142 managers in footwear retail stores, we found that Service-Oriented HPWSs and service leadership reduced the influence of one another on collective customer knowledge and service climate: the positive influence of service leadership on collective customer knowledge and service climate was stronger when Service-Oriented HPWSs were lower than when they were higher, and, likewise, the positive influence of Service-Oriented HPWSs was stronger when service leadership was lower than when it was higher. We further proposed and found that collective customer knowledge and service climate were positively related to objective financial outcomes through service performance. Implications for the literature and managerial practices are discussed.

  11. High Performance Computing Today

    SciTech Connect

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen rapid change in vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last 5 decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average, they see an increase in performance of two orders of magnitude every decade, which corresponds to a factor of roughly 1.6 per year.

  12. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  13. Development of a high-performance, coal-fired power generating system with a pyrolysis gas and char-fired high-temperature furnace

    SciTech Connect

    Shenker, J.

    1995-11-01

    A high-performance power system (HIPPS) is being developed. This system is a coal-fired, combined-cycle plant that will have an efficiency of at least 47 percent, based on the higher heating value of the fuel. The original emissions goal of the project was for NOx and SOx to each be below 0.15 lb/MMBtu. In the Phase 2 RFP this emissions goal was reduced to 0.06 lb/MMBtu. The ultimate goal of HIPPS is to have an all-coal-fueled system, but initial versions of the system are allowed up to 35 percent heat input from natural gas. Foster Wheeler Development Corporation is currently leading a team effort with AlliedSignal, Bechtel, Foster Wheeler Energy Corporation, Research-Cottrell, TRW and Westinghouse. Previous work on the project was also done by General Electric. The HIPPS plant will use a High-Temperature Advanced Furnace (HITAF) to achieve combined-cycle operation with coal as the primary fuel. The HITAF is an atmospheric-pressure, pulverized-fuel-fired boiler/air heater. The HITAF is used to heat air for the gas turbine and also to transfer heat to the steam cycle. Its design and functions are very similar to those of conventional PC boilers. Some important differences, however, arise from the requirements of combined-cycle operation.

  14. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

    In this paper, parallelism methodologies for the mapping of rules derived from machine learning algorithms onto both software and hardware are investigated. Fed with patient disease data, these algorithms output medical diagnostic decision trees and their corresponding rules. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks, which operate concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measurements taken from patients can be fed both into the chips and into the parallel programs, and can be recognized according to the classification rules incorporated in the design of the chips and programs. The chips and the programs constitute medical decision support systems and can be incorporated into portable micro devices, assisting physicians in their everyday diagnostic practice.
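
    The record does not name the learning tooling; as a stand-in illustration, the sketch below trains a small decision tree with scikit-learn and prints its root-to-leaf paths as IF-THEN rules - the kind of rule set that would then be mapped onto threads or hardware blocks.

        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()  # stand-in for patient disease data
        tree = DecisionTreeClassifier(max_depth=3, random_state=0)
        tree.fit(data.data, data.target)

        # Each root-to-leaf path is one IF-THEN rule; disjoint subtrees can be
        # evaluated concurrently, which is what the hardware mapping exploits.
        print(export_text(tree, feature_names=list(data.feature_names)))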

  15. Practical Applications of in Vivo and ex Vivo MRI in Toxicologic Pathology Using a Novel High-performance Compact MRI System.

    PubMed

    Tempel-Brami, Catherine; Schiffenbauer, Yael S; Nyska, Abraham; Ezov, Nati; Spector, Itai; Abramovitch, Rinat; Maronpot, Robert R

    2015-07-01

    Magnetic resonance imaging (MRI) is widely used in preclinical research and drug development and is a powerful noninvasive method for assessment of phenotypes and therapeutic efficacy in murine models of disease. In vivo MRI provides an opportunity for longitudinal evaluation of tissue changes and phenotypic expression in experimental animal models. Ex vivo MRI of fixed samples permits a thorough examination of multiple digital slices while leaving the specimen intact for subsequent conventional hematoxylin and eosin (H&E) histology. With the advent of new compact MRI systems that are designed to operate in most conventional labs without the cost, complexity, and infrastructure needs of conventional MRI systems, MRI is now viable as a practical everyday modality. The purpose of this study was to investigate the capabilities of a new compact, high-performance MRI platform (M2™; Aspect Imaging, Israel) as it relates to preclinical toxicology studies. This overview will provide examples of major organ system pathologies with an emphasis on how compact MRI can serve as an important adjunct to conventional pathology by nondestructively providing 3-dimensional (3-D) digital data sets, detailed morphological insights, and quantitative information. Comparative data using compact MRI for both in vivo and ex vivo imaging are provided, as well as validation using conventional H&E.

  16. Selective extraction and determination of vitamin B12 in urine by ionic liquid-based aqueous two-phase system prior to high-performance liquid chromatography.

    PubMed

    Berton, Paula; Monasterio, Romina P; Wuilloud, Rodolfo G

    2012-08-15

    A rapid and simple extraction technique based on an aqueous two-phase system (ATPS) was developed for separation and enrichment of vitamin B12 in urine samples. The proposed ATPS-based method involves the application of the hydrophilic ionic liquid (IL) 1-hexyl-3-methylimidazolium chloride and K2HPO4. After the extraction procedure, the vitamin B12-enriched IL upper phase was directly injected into the high performance liquid chromatography (HPLC) system for analysis. All variables influencing the IL-based ATPS approach (e.g., the composition of the ATPS, pH and temperature values) were evaluated. The average extraction efficiency was 97% under optimum conditions. Only 5.0 mL of sample and a single hydrolysis/deproteinization/extraction step were required, followed by direct injection of the IL-rich upper phase into the HPLC system for vitamin B12 determination. A detection limit of 0.09 μg mL⁻¹, a relative standard deviation (RSD) of 4.50% (n=10) and a linear range of 0.40-8.00 μg mL⁻¹ were obtained. The proposed green analytical procedure was satisfactorily applied to the analysis of samples with highly complex matrices, such as urine. Finally, the IL-ATPS technique can be considered an efficient tool for water-soluble vitamin B12 extraction. PMID:22841117
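
    Figures of merit such as the linear range and detection limit follow from a linear calibration. As an illustration only - the data points are hypothetical and the paper does not state which LOD convention it used - a least-squares fit combined with the common 3.3·σ/slope rule looks like this:

        import numpy as np

        # Hypothetical calibration points (concentration in ug/mL vs peak area)
        conc = np.array([0.4, 1.0, 2.0, 4.0, 8.0])
        area = np.array([10.3, 25.1, 50.6, 99.8, 200.9])

        slope, intercept = np.polyfit(conc, area, 1)
        resid = area - (slope * conc + intercept)
        sigma = resid.std(ddof=2)          # residual standard deviation
        lod = 3.3 * sigma / slope          # common 3.3*sigma/slope convention
        print(f"slope = {slope:.2f}, LOD ~ {lod:.3f} ug/mL")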

  17. Determination of histamine in wines with an on-line pre-column flow derivatization system coupled to high performance liquid chromatography.

    PubMed

    García-Villar, Natividad; Saurina, Javier; Hernández-Cassou, Santiago

    2005-09-01

    A new rapid and sensitive high performance liquid chromatography (HPLC) method for determining histamine in red wine samples, based on continuous flow derivatization with 1,2-naphthoquinone-4-sulfonate (NQS), is proposed. In this system, samples are derivatized on-line in a three-channel flow manifold for reagent, buffer and sample. The reaction takes place in a PTFE coil heated at 80 degrees C and with a residence time of 2.9 min. The reaction mixture is injected directly into the chromatographic system, where the histamine derivative is separated from other aminated compounds present in the wine matrix in less than ten minutes. The HPLC procedure involves a C18 column, a binary gradient of 2% acetic acid-methanol as a mobile phase, and UV detection at 305 nm. Analytical parameters of the method are evaluated using red wine samples. The linear range is up to 66.7 mg L⁻¹ (r = 0.9999), the precision (RSD) is 3%, the detection limit is 0.22 mg L⁻¹, and the average histamine recovery is 101.5% ± 6.7%. Commercial red wines from different Spanish regions are analyzed with the proposed method.

  18. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the faces of 10 male subjects and the spectral characteristics of the ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold-standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment with clear applications in several domains, including telemedicine and homecare.
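
    The study extracts the pulse with the smoothed pseudo-Wigner-Ville distribution; as a much simpler stand-in that conveys the first step, the sketch below estimates the dominant cardiac frequency of a synthetic facial-ROI intensity trace with an FFT restricted to the physiological band.

        import numpy as np

        def pulse_rate_bpm(roi_means, fs_hz):
            # Dominant frequency in the 0.7-3 Hz (42-180 bpm) cardiac band.
            x = roi_means - np.mean(roi_means)
            freqs = np.fft.rfftfreq(len(x), d=1 / fs_hz)
            power = np.abs(np.fft.rfft(x)) ** 2
            band = (freqs >= 0.7) & (freqs <= 3.0)
            return 60.0 * freqs[band][np.argmax(power[band])]

        fs = 30.0  # typical webcam frame rate
        t = np.arange(0, 30, 1 / fs)
        trace = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(len(t))
        print(pulse_rate_bpm(trace, fs))  # ~72 bpm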

  19. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  20. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  1. High-performance size exclusion chromatography with a multi-wavelength absorbance detector study on dissolved organic matter characterisation along a water distribution system.

    PubMed

    Huang, Huiping; Sawade, Emma; Cook, David; Chow, Christopher W K; Drikas, Mary; Jin, Bo

    2016-06-01

    This study examined the associations between dissolved organic matter (DOM) characteristics and potential nitrification occurrence in the presence of chloramine along a drinking water distribution system. High-performance size exclusion chromatography (HPSEC) coupled with a multiple-wavelength detector (200-280 nm) was employed to characterise DOM by molecular weight distribution, bacterial activity was analysed using flow cytometry, and a package of simple analytical tools, such as dissolved organic carbon, absorbance at 254 nm, nitrate, nitrite, ammonia and total disinfectant residual, was also applied, and their applicability to indicate water quality changes in distribution systems was evaluated. Results showed that multi-wavelength HPSEC analysis was useful for providing information about DOM character, and changes in molecular weight profiles at wavelengths less than 230 nm could also be related to other water quality parameters. Correct selection of the UV wavelengths can be an important factor for providing appropriate indicators associated with different DOM compositions. DOM molecular weight in the range of 0.2-0.5 kDa measured at 210 nm correlated positively with oxidised nitrogen concentration (r=0.99) and with the concentration of active bacterial cells in the distribution system (r=0.85). Our study also showed that the changes in DOM character and bacterial cells were significant at those sampling points that had decreases in total disinfectant residual. HPSEC-UV measured at 210 nm and flow cytometry can detect changes in low-molecular-weight DOM and bacterial levels, respectively, when nitrification occurs within the chloraminated distribution system. PMID:27266320
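
    The reported r values are Pearson correlations between paired series collected along the network. The sketch below shows the computation on hypothetical numbers (the study's own data are not reproduced here):

        import numpy as np

        # Hypothetical paired series: 0.2-0.5 kDa DOM fraction at 210 nm vs
        # oxidised nitrogen, sampled at five points along the network.
        low_mw_dom = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
        oxidised_n = np.array([0.05, 0.11, 0.19, 0.24, 0.33])

        r = np.corrcoef(low_mw_dom, oxidised_n)[0, 1]
        print(f"Pearson r = {r:.2f}")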

  2. Monoclonal antibody heterogeneity analysis and deamidation monitoring with high-performance cation-exchange chromatofocusing using simple, two component buffer systems.

    PubMed

    Kang, Xuezhen; Kutzko, Joseph P; Hayes, Michael L; Frey, Douglas D

    2013-03-29

    The use of either a polyampholyte buffer or a simple buffer system for the high-performance cation-exchange chromatofocusing of monoclonal antibodies is demonstrated for the case where the pH gradient is produced entirely inside the column and with no external mixing of buffers. The simple buffer system used was composed of two buffering species, one which becomes adsorbed onto the column packing and one which does not adsorb, together with an adsorbed ion that does not participate in acid-base equilibrium. The method which employs the simple buffer system is capable of producing a gradual pH gradient in the neutral to acidic pH range that can be adjusted by proper selection of the starting and ending pH values for the gradient as well as the buffering species concentration, pKa, and molecular size. By using this approach, variants of representative monoclonal antibodies with isoelectric points of 7.0 or less were separated with high resolution so that the approach can serve as a complementary alternative to isoelectric focusing for characterizing a monoclonal antibody based on differences in the isoelectric points of the variants present. Because the simple buffer system used eliminates the use of polyampholytes, the method is suitable for antibody heterogeneity analysis coupled with mass spectrometry. The method can also be used at the preparative scale to collect highly purified isoelectric variants of an antibody for further study. To illustrate this, a single isoelectric point variant of a monoclonal antibody was collected and used for a stability study under forced deamidation conditions.
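
    The internally generated pH gradient rests on ordinary buffer equilibria: as the ratio of the basic to acidic forms of the buffering species shifts along the column, the local pH follows the Henderson-Hasselbalch relation. A toy illustration with a hypothetical pKa of 6.8, spanning the neutral-to-acidic range the record mentions:

        import numpy as np

        def buffer_ph(pka, base_conc, acid_conc):
            # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA]).
            return pka + np.log10(base_conc / acid_conc)

        # Hypothetical base/acid ratios along the column as the adsorbed
        # species is progressively titrated, giving a gradual pH gradient.
        for ratio in np.linspace(5.0, 0.2, 6):
            print(f"[A-]/[HA] = {ratio:4.1f} -> pH = {buffer_ph(6.8, ratio, 1.0):.2f}")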

  3. Final Assessment of Preindustrial Solid-State Route for High-Performance Mg-System Alloys Production: Concluding the EU Green Metallurgy Project

    NASA Astrophysics Data System (ADS)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Giger, Franz; Kim, Shae K.

    2013-10-01

    The Green Metallurgy Project, a LIFE+ project co-financed by the European Union Commission, has now been completed. The purpose of the Green Metallurgy Project was to establish and assess a preindustrial process capable of producing nanostructure-based high-performance Mg-Zn(Y) magnesium alloys and fully recycled eco-magnesium alloys. In this work, the Consortium presents the final outcome and verification of the completed prototype construction. To compare upstream cradle-to-grave footprints when ternary nanostructured Mg-Y-Zn alloys or recycled eco-magnesium chips are produced during the process cycle using the same equipment, a life cycle analysis was completed following the ISO 14040 methodology. During tests to fine-tune the prototype machinery and compare the quality of semifinished bars produced using the scaled-up system, the Buhler team produced interesting and significant results. Their tests showed the ternary Mg-Y-Zn magnesium alloys to have a higher specific strength than the 6000 series wrought aluminum alloys usually employed in automotive components.

  4. A meta-analysis of country differences in the high-performance work system-business performance relationship: the roles of national culture and managerial discretion.

    PubMed

    Rabl, Tanja; Jayasinghe, Mevan; Gerhart, Barry; Kühlmann, Torsten M

    2014-11-01

    Our article develops a conceptual framework based primarily on national culture perspectives but also incorporating the role of managerial discretion (cultural tightness-looseness, institutional flexibility), which is aimed at achieving a better understanding of how the effectiveness of high-performance work systems (HPWSs) may vary across countries. Based on a meta-analysis of 156 HPWS-business performance effect sizes from 35,767 firms and establishments in 29 countries, we found that the mean HPWS-business performance effect size was positive overall (corrected r = .28) and positive in each country, regardless of its national culture or degree of institutional flexibility. In the case of national culture, the HPWS-business performance relationship was, on average, actually more strongly positive in countries where the degree of a priori hypothesized consistency or fit between an HPWS and national culture (according to national culture perspectives) was lower, except in the case of tight national cultures, where greater a priori fit of an HPWS with national culture was associated with a more positive HPWS-business performance effect size. However, in loose cultures (and in cultures that were neither tight nor loose), less a priori hypothesized consistency between an HPWS and national culture was associated with higher HPWS effectiveness. As such, our findings suggest the importance of not only national culture but also managerial discretion in understanding the HPWS-business performance relationship. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
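
    A pooled effect size like the corrected r = .28 reported here comes from averaging per-study correlations; one common pooling recipe (Fisher-z transform with n-3 weights) is sketched below on hypothetical study values - the article's psychometric corrections are not reproduced.

        import numpy as np

        def mean_effect_size(rs, ns):
            # Fisher-z transform, (n-3)-weighted average, back-transform.
            z = np.arctanh(np.asarray(rs, dtype=float))
            w = np.asarray(ns, dtype=float) - 3.0
            return np.tanh((w * z).sum() / w.sum())

        # Hypothetical per-study HPWS-performance correlations and sample sizes
        print(mean_effect_size([0.21, 0.35, 0.28], [120, 80, 300]))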

  5. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high-temperature furnace (HITAF): Volume 4. Final report

    SciTech Connect

    1996-05-01

    An outgrowth of our studies of the FWDC coal-fired high performance power systems (HIPPS) concept was the development of a concept for the repowering of existing boilers. The initial analysis of this concept indicates that it will be both technically and economically viable. A unique feature of our greenfields HIPPS concept is that it integrates the operation of a pressurized pyrolyzer and a pulverized-fuel-fired boiler/air heater. Once this type of operation is achieved, there are a few different applications of this core technology. Two greenfields plant options are the base case plant and a plant where ceramic air heaters are used to extend the limit of air heating in the HITAF. The greenfields designs can be used for repowering in the conventional sense, which involves replacing almost everything in the plant except the steam turbine and accessories. Another option is to keep the existing boiler and add a pyrolyzer and gas turbine to the plant. The study was done on an Eastern utility plant. The owner is currently considering replacing two units with atmospheric fluidized bed boilers, but is interested in a comparison with HIPPS technology. After repowering, the emissions levels need to be 0.25 lb SOx/MMBtu and 0.15 lb NOx/MMBtu.

  6. Laser videofluorometer system for real-time characterization of high-performance liquid chromatographic eluate. [3-hydroxy-benzo(a)pyrene

    SciTech Connect

    Skoropinski, D.B.; Callis, J.B.; Danielson, J.D.S.; Christian, G.D.

    1986-11-01

    A second-generation videofluorometer has been developed for real-time characterization of high-performance liquid chromatographic eluate. The instrument features a nitrogen-laser-pumped dye laser as the excitation source and a quarter-meter polychromator/microchannel-plate-intensified diode array as the fluorescence detector. The dye laser cavity is tuned with a moving-iron galvanometer scanner grating drive, permitting the laser output to be changed to any wavelength in its range in less than 40 ms. Thus, the optimum excitation wavelength can be chosen for each chromatographic region. A minimum detection limit of 13 pptr has been obtained for 3-hydroxy-benzo(a)pyrene in a conventional fluorescence cuvette with a 30-s data acquisition. For the same substance eluted chromatographically, a minimum detection limit of 50 pg has been obtained, and a linear dynamic range of greater than 3 orders of magnitude was observed. An extract of soil that had been contaminated with polyaromatic hydrocarbons was analyzed as a practical test of the system, permitting the quantitation of three known species and the identification and quantitation of a previously unknown fourth compound.

  7. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    PubMed

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance. PMID:21967297
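
    Hierarchical linear modeling of this kind - employees nested within branches, with a branch-level predictor of individual outcomes - can be sketched with statsmodels' mixed-effects API. The data below are simulated stand-ins, not the study's:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n_branches, n_emp = 20, 10
        hpws = np.repeat(rng.normal(3.5, 0.5, n_branches), n_emp)  # branch-level
        u = np.repeat(rng.normal(0.0, 0.3, n_branches), n_emp)     # random intercept
        perf = 1.0 + 0.5 * hpws + u + rng.normal(0.0, 0.4, n_branches * n_emp)
        df = pd.DataFrame({"perf": perf, "hpws": hpws,
                           "branch": np.repeat(np.arange(n_branches), n_emp)})

        # Random-intercept model: individual performance on branch-level HPWS.
        fit = smf.mixedlm("perf ~ hpws", df, groups=df["branch"]).fit()
        print(fit.params)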

  8. Do they see eye to eye? Management and employee perspectives of high-performance work systems and influence processes on service quality.

    PubMed

    Liao, Hui; Toya, Keiko; Lepak, David P; Hong, Ying

    2009-03-01

    Extant research on high-performance work systems (HPWSs) has primarily examined the effects of HPWSs on establishment- or firm-level performance from a management perspective in manufacturing settings. The current study extends this literature by differentiating management and employee perspectives of HPWSs and examining how the two perspectives relate to employee individual performance in the service context. Data collected in three phases from multiple sources involving 292 managers, 830 employees, and 1,772 customers of 91 bank branches revealed significant differences between management and employee perspectives of HPWSs. There were also significant differences in employee perspectives of HPWSs among employees of different employment statuses and among employees of the same status. Further, employee perspective of HPWSs was positively related to individual general service performance through the mediation of employee human capital and perceived organizational support and was positively related to individual knowledge-intensive service performance through the mediation of employee human capital and psychological empowerment. At the same time, management perspective of HPWSs was related to employee human capital and both types of service performance. Finally, a branch's overall knowledge-intensive service performance was positively associated with customer overall satisfaction with the branch's service. PMID:19271796

  9. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    PubMed

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance.

  10. Engineering development of coal-fired high performance power systems, Phases 2 and 3. Quarterly progress report, October 1--December 31, 1996. Final report

    SciTech Connect

    1996-12-31

    The goals of this program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% efficiency (HHV); NOx, SOx, and particulate emissions below 10% of NSPS; coal providing ≥65% of heat input; all solid wastes benign; and a cost of electricity 90% of that of present plants. Work reported herein is from Task 1.3 HIPPS Commercial Plant Design, Task 2.2 HITAF Air Heater, and Task 2.4 Duct Heater Design. The impact on cycle efficiency from the integration of various technology advances is presented. The criteria associated with a commercial HIPPS plant design as well as possible environmental control options are presented. The design of the HITAF air heaters, both radiative and convective, is the most critical task in the program. In this report, a summary of the effort associated with the radiative air heater designs that have been considered is provided. The primary testing of the air heater design will be carried out in the UND/EERC pilot-scale furnace; progress to date on the design and construction of the furnace is a major part of this report. The results of laboratory and bench scale activities associated with defining slag properties are presented. Correct material selection is critical for the success of the concept; the materials, both ceramic and metallic, being considered for the radiant air heater are presented. The activities associated with the duct heater are also presented.
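
    The quantitative goals above reduce to a simple energy balance: cycle efficiency is net electrical output over total heat input (HHV basis), and the coal share is coal heat over total heat input. The sketch below checks a candidate design against the two numeric goals; all plant numbers are hypothetical placeholders, not HIPPS design values.

        # Illustrative check of a candidate plant against the HIPPS goals quoted
        # above. All plant numbers here are hypothetical placeholders.
        def check_hipps_goals(net_power_mw, coal_heat_mw, other_heat_mw):
            """Return (efficiency_hhv, coal_fraction, meets_goals)."""
            total_heat = coal_heat_mw + other_heat_mw    # total thermal input, HHV basis
            efficiency = net_power_mw / total_heat       # cycle efficiency
            coal_fraction = coal_heat_mw / total_heat    # share of heat input from coal
            meets = efficiency > 0.47 and coal_fraction >= 0.65
            return efficiency, coal_fraction, meets

        # Hypothetical 400 MWe plant: 500 MWt from coal, 300 MWt from natural gas.
        eff, frac, ok = check_hipps_goals(400.0, 500.0, 300.0)
        print(f"efficiency = {eff:.1%}, coal fraction = {frac:.1%}, goals met: {ok}")
        # -> efficiency = 50.0%, coal fraction = 62.5%, goals met: False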

  11. Design and implementation of an automated liquid-phase microextraction-chip system coupled on-line with high performance liquid chromatography.

    PubMed

    Li, Bin; Petersen, Nickolaj Jacob; Payán, María D Ramos; Hansen, Steen Honoré; Pedersen-Bjergaard, Stig

    2014-03-01

    An automated liquid-phase microextraction (LPME) device in a chip format has been developed and coupled directly to high performance liquid chromatography (HPLC). A 10-port 2-position switching valve was used to hyphenate the LPME-chip with the HPLC autosampler, and to collect the extracted analytes, which then were delivered to the HPLC column. The LPME-chip-HPLC system was completely automated and controlled by the software of the HPLC instrument. The performance of this system was demonstrated with five alkaloids, i.e. morphine, codeine, thebaine, papaverine, and noscapine, as model analytes. The composition of the supported liquid membrane (SLM) and carrier was optimized in order to achieve reasonable extraction performance for all five alkaloids. With 1-octanol as SLM solvent and with 25 mM sodium octanoate as anionic carrier, extraction recoveries for the different opium alkaloids ranged between 17% and 45%. The extraction provided high selectivity, and no interfering peaks in the chromatograms were observed when applied to human urine samples spiked with alkaloids. The detection limits using UV-detection were in the range of 1-21 ng/mL for the five opium alkaloids in water samples. The repeatability was within 5.0-10.8% (RSD). The membrane liquid in the LPME-chip was regenerated automatically after every third injection. With this procedure, the liquid membrane in the LPME-chip remained stable for 3-7 days of continuous operation, depending on the complexity of the sample solutions. With this LPME-chip-HPLC system, series of samples were automatically injected, extracted, separated, and detected without any operator interaction.
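
    The recoveries reported above follow the standard LPME figures of merit: extraction recovery is the fraction of analyte transferred from sample to acceptor phase, and the enrichment factor is the concentration gain. The sketch below uses hypothetical concentrations and volumes, not values from the paper.

        # Minimal sketch of the standard LPME figures of merit; concentrations
        # and volumes below are hypothetical, not values from the paper.
        def extraction_recovery(c_acceptor, v_acceptor, c_sample, v_sample):
            """Percent of analyte transferred from sample to acceptor phase."""
            return 100.0 * (c_acceptor * v_acceptor) / (c_sample * v_sample)

        def enrichment_factor(c_acceptor, c_sample):
            """Concentration gain of the acceptor phase over the original sample."""
            return c_acceptor / c_sample

        # Example: 10 uL acceptor, 100 uL sample, analyte enriched 4.5-fold.
        print(extraction_recovery(c_acceptor=450.0, v_acceptor=10.0,
                                  c_sample=100.0, v_sample=100.0))  # -> 45.0 (%)
        print(enrichment_factor(450.0, 100.0))                      # -> 4.5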

  12. Activities on Realization of High-Power and Steady-State ECRH System and Achievement of High Performance Plasmas in LHD

    SciTech Connect

    Shimozuma, T.; Kubo, S.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Ikeda, R.; Tamura, N.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Takita, Y.; Mutoh, T.; Minami, R.; Kariya, T.; Imai, T.; Idei, H.; Shapiro, M. A.; Temkin, R. J.; Felici, F.; Goodman, T.

    2009-11-26

    Electron Cyclotron Resonance Heating (ECRH) has contributed to the achievement of high performance plasma production, high electron temperature plasmas, and sustainment of steady-state plasmas in the Large Helical Device (LHD). Our immediate targets for upgrading the ECRH system are injection into LHD of 5 MW for several seconds and of 1 MW for longer than one hour. The improvement will greatly extend the plasma parameter regime. For that purpose, we have been promoting the development and installation of 77 GHz/1-1.5 MW/several-second and 0.3 MW/CW gyrotrons in collaboration with the University of Tsukuba. The transmission lines were re-examined and improved for high-power and CW transmission. In the recent experimental campaign, two 77 GHz gyrotrons were operated. One more gyrotron, designed for 1.5 MW/2 s output, was constructed and is being tested. We have also been working to improve total ECRH efficiency for efficient gyrotron-power use and efficient plasma heating, e.g., a new waveguide alignment method, mode-content analysis, and feedback control of the injection polarization. In the last experimental campaign, the 77 GHz gyrotrons were used in combination with the existing 84 GHz range and 168 GHz gyrotrons. A multi-frequency ECRH system is more flexible in plasma heating experiments and diagnostics. Many experiments have been performed in relation to high electron temperature plasmas by realization of core electron-root confinement (CERC), electron cyclotron current drive (ECCD), Electron Bernstein Wave heating, and steady-state plasma sustainment. Some of the experimental results are briefly described.

  13. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    NASA Astrophysics Data System (ADS)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.

  14. Prospective Randomized Controlled Study on the Efficacy of Multimedia Informed Consent for Patients Scheduled to Undergo Green-Light High-Performance System Photoselective Vaporization of the Prostate

    PubMed Central

    Ham, Dong Yeub; Choi, Woo Suk; Song, Sang Hoon; Ahn, Young-Joon; Park, Hyoung Keun; Kim, Hyeong Gon

    2016-01-01

    Purpose: The aim of this study was to evaluate the efficacy of a multimedia informed consent (IC) presentation on the understanding and satisfaction of patients who were scheduled to receive 120-W green-light high-performance system photoselective vaporization of the prostate (HPS-PVP). Materials and Methods: A multimedia IC (M-IC) presentation for HPS-PVP was developed. Forty men with benign prostatic hyperplasia who were scheduled to undergo HPS-PVP were prospectively randomized to a conventional written IC group (W-IC group, n=20) or the M-IC group (n=20). The allocated IC was obtained by one certified urologist, followed by a 15-question test (maximum score, 15) to evaluate objective understanding, and questionnaires on subjective understanding (range, 0~10) and satisfaction (range, 0~10) using a visual analogue scale. Results: Demographic characteristics, including age and the highest level of education, did not significantly differ between the two groups. No significant differences were found in scores reflecting the objective understanding of HPS-PVP (9.9±2.3 vs. 10.6±2.8, p=0.332) or in subjective understanding scores (7.5±2.1 vs. 8.6±1.7, p=0.122); however, the M-IC group showed higher satisfaction scores than the W-IC group (7.4±1.7 vs. 8.4±1.5, p=0.033). After adjusting for age and educational level, the M-IC group still had significantly higher satisfaction scores. Conclusions: M-IC did not enhance the objective knowledge of patients regarding this surgical procedure. However, it improved the satisfaction of patients with the IC process itself. PMID:27169129
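
    As a quick plausibility check on a comparison like the satisfaction scores above (7.4±1.7 vs. 8.4±1.5, n=20 per arm), a two-sample t-test can be run from the summary statistics alone. The sketch below uses scipy; because the authors adjusted for age and education and may have used a different test, the p-value here is not expected to reproduce the published p=0.033.

        # Re-check of the reported satisfaction difference from summary
        # statistics alone; the authors' exact test may differ, so this need
        # not reproduce the published p = 0.033.
        from scipy.stats import ttest_ind_from_stats

        t, p = ttest_ind_from_stats(mean1=7.4, std1=1.7, nobs1=20,
                                    mean2=8.4, std2=1.5, nobs2=20)
        print(f"t = {t:.2f}, two-sided p = {p:.3f}")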

  15. Simulation of reconfigurable multifunctional continuous logic devices as advanced components of the next generation high-performance MIMO-systems for the processing and interconnection

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2013-12-01

    We consider the design and modeling of hardware realizations of reconfigurable multifunctional continuous logic devices (RMCLD) as advanced components of next-generation high-performance MIMO systems for processing and interconnection. The RMCLD realize functions of two-valued and continuous logic with current inputs and current outputs on the basis of CMOS current mirrors and circuits that realize limited-difference functions. We show the advantages of such elements, which encode variables as photocurrent levels, allowing optical inputs (by photodetectors (PD)) and optical outputs (by LED) to be provided easily. The RMCLD design is based on current mirrors realized with 1.5 μm CMOS transistors. With only 55-65 transistors, 1 PD, and 1 LED, the offered circuits are quite compact and can be integrated in 1D and 2D arrays. We consider the capabilities of the offered circuits, show simulation results, and discuss prospective applications, in particular time-pulse coding for multivalued, continuous, neuro-fuzzy, and matrix logics. Simulation of the NOT, MIN, MAX, equivalence (EQ), and other functions implemented by the RMCLD showed that the level of logical variables can range from 1 μA to 10 μA for low-power-consumption variants. The base cell of the RMCLD has low power consumption (<1 mW) and a processing time of about 1-11 μs at a supply voltage of 2.4-3.3 V. The cells were modeled in OrCAD.
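
    The element functions named above can be modeled in software. The sketch below builds NOT, MIN, MAX, and EQ from the limited-difference primitive the current-mirror circuits realize; the 10 μA full-scale current comes from the abstract, while this particular equivalence definition (EQ = Imax - |a - b|) is an assumption chosen for illustration.

        # Software model of the current-mode continuous-logic primitives,
        # built from the limited-difference operation the circuits realize.
        # Currents in microamps; 10 uA full scale is from the abstract, the
        # EQ definition is one common choice and an assumption here.
        I_MAX = 10.0  # logical "1" current level, uA

        def ldiff(a, b):   # limited difference: a - b, clipped at zero
            return max(0.0, a - b)

        def not_(a):       # continuous negation
            return ldiff(I_MAX, a)

        def min_(a, b):    # MIN via two limited differences
            return ldiff(a, ldiff(a, b))

        def max_(a, b):    # MAX via addition and a limited difference
            return b + ldiff(a, b)

        def eq(a, b):      # equivalence: high when the two currents are close
            return ldiff(I_MAX, ldiff(a, b) + ldiff(b, a))

        print(min_(3.0, 7.0), max_(3.0, 7.0), not_(3.0), eq(3.0, 7.0))
        # -> 3.0 7.0 7.0 6.0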

  16. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementations of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C. Parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability, with speedup proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. Typical speedup achieved for parallel MCML (up to 326x) demonstrated a linear increase with problem size. Precision of the MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially resolved results. The presented parallel versions of the MCML and IAD programs are portable to multiple computing platforms. The parallel programs can significantly speed up simulations for scientists and can be utilized to their full potential on computing systems that are readily available without additional costs.
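
    The scaling behavior described above (linear speedup for IAD, saturation for MCML) reduces to two standard metrics: speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. The timing table below is hypothetical, for illustration only.

        # Back-of-envelope scaling analysis: speedup S(p) = T(1)/T(p) and
        # parallel efficiency E(p) = S(p)/p. Timings are hypothetical.
        timings = {1: 1000.0, 10: 105.0, 100: 12.0, 326: 3.3, 1001: 3.1}  # seconds

        t1 = timings[1]
        for p, tp in sorted(timings.items()):
            speedup = t1 / tp
            efficiency = speedup / p
            print(f"p = {p:4d}: speedup = {speedup:6.1f}x, efficiency = {efficiency:5.1%}")
        # Efficiency decaying toward zero while speedup flattens marks the
        # communication-dominated saturation point reported for parallel MCML.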

  17. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems.

  18. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  19. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  20. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; on the growth and field emission properties of carbon nanotubes and semiconducting nanowires; on high performance thermoelectric materials; and on other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  1. High-Performance Ball Bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Roger W., Jr.; Haluck, David A.; Olinger, John B.; Owen, Samuel S.; Poole, William E.

    1995-01-01

    High-performance bearing features strong, lightweight, self-lubricating cage with self-lubricating liners in ball apertures. Designed to operate at high speed (tens of thousands of revolutions per minute) in cryogenic environment like liquid-oxygen or liquid-hydrogen turbopump. Includes inner race, outer race, and cage keeping bearing balls equally spaced.

  2. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  3. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  4. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  5. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  6. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which to use these tools and technologies for software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included at the end of this paper is a table of the discussed development tools and their operational environments.

  7. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic aircraft with such capabilities as short takeoff and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  8. DOE research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  9. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed, specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  10. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  11. Development and validation of a high-performance liquid chromatography method for the simultaneous determination of aspirin and folic acid from nano-particulate systems.

    PubMed

    Chaudhary, Abhishek; Wang, Jeffrey; Prabhu, Sunil

    2010-09-01

    Attention has shifted from the treatment of colorectal cancer (CRC) to chemoprevention using aspirin and folic acid as agents capable of preventing the onset of colon cancer. However, no sensitive analytical method exists to simultaneously quantify the two drugs when released from polymer-based nanoparticles. Thus, a rapid, highly sensitive high-performance liquid chromatography method to simultaneously detect low quantities of aspirin (hydrolyzed to salicylic acid, the active moiety) and folic acid released from biodegradable polylactide-co-glycolide (PLGA) copolymer nanoparticles was developed. Analysis was done on a reversed-phase C18 column using a photodiode array detector at wavelengths of 233 nm (salicylic acid) and 277 nm (folic acid). The mobile phase consisted of an acetonitrile-0.1% trifluoroacetic acid mixture programmed for a 30 min gradient elution analysis. In the range of 0.1-100 μg/mL, the assay showed good linearity for salicylic acid (R^2 = 0.9996) and folic acid (R^2 = 0.9998). The method demonstrated good reproducibility, intra- and inter-day precision, and accuracy (99.67% and 100.1%), with low limits of detection (0.03 and 0.01 μg/mL) and quantitation (0.1 and 0.05 μg/mL) for salicylic acid and folic acid, respectively. The suitability of the method was demonstrated by simultaneously determining salicylic acid and folic acid released from PLGA nanoparticles.
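
    Figures of merit such as R^2 = 0.9996 and the detection and quantitation limits come from an ordinary least-squares calibration curve. The sketch below uses synthetic calibration points and the common ICH-style definitions LOD = 3.3·σ/slope and LOQ = 10·σ/slope, where σ is the residual standard deviation; the paper's own calculation may differ in detail.

        # Calibration-curve workflow behind R^2 and LOD/LOQ figures; the data
        # points are synthetic and the LOD/LOQ follow common ICH definitions.
        import numpy as np

        conc = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0])        # ug/mL
        area = np.array([2.1, 20.4, 101.8, 203.5, 1015.2, 2031.0])  # peak areas (synthetic)

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        ss_res = np.sum((area - pred) ** 2)
        ss_tot = np.sum((area - area.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        sigma = np.sqrt(ss_res / (len(conc) - 2))   # residual standard deviation

        print(f"R^2 = {r2:.4f}")
        print(f"LOD = {3.3 * sigma / slope:.3f} ug/mL, LOQ = {10 * sigma / slope:.3f} ug/mL")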

  12. High performance storable propellant resistojet

    NASA Astrophysics Data System (ADS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985, resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 s specific impulse (Isp) to an 830 W N2H4 thruster operating at 305 s Isp. Prior to 1985, fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base, NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state of the art. Higher performance thrusters promised to increase the market demand for resistojets and enable space missions requiring higher performance. During the program, three resistojets were fabricated and tested, high-temperature wire and coupon materials tests were completed, and a life test was conducted on an advanced gas generator.
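
    The power figures quoted above scale with specific impulse roughly as P = F·g0·Isp/(2η): ideal jet power per unit thrust divided by an overall electrothermal efficiency η. The sketch below evaluates this relation for the two flight points named in the record; the efficiency value is an assumed placeholder, not a program number.

        # Rough power-per-thrust relation for a resistojet: P = F*g0*Isp/(2*eta).
        # The efficiency below is an assumed placeholder.
        G0 = 9.80665  # m/s^2

        def power_per_newton(isp_s, efficiency=0.7):
            """Electrical watts required per newton of thrust (ideal jet power / eta)."""
            return G0 * isp_s / (2.0 * efficiency)

        for isp in (123.0, 305.0):  # the 1965 gN2 and 1985 N2H4 flight points above
            print(f"Isp = {isp:5.1f} s -> {power_per_newton(isp):6.0f} W per N of thrust")
        # Higher Isp buys propellant mass at roughly proportional power cost,
        # which is why performance and power limits are targeted together.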

  13. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress in power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014, following the first study of a long-term stable all-solid-state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost-effective and high performance solar cells. Here, notable achievements in the primary device configuration, involving the perovskite layer, hole-transporting materials (HTMs), and electron-transporting materials (ETMs), are reviewed. Numerous strategies for enhancing the photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of the perovskite layer, HTM design, and ETM modifications, are discussed in detail. In addition, perovskite solar cells free of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  14. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is the two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe(3)O(4) nanoparticles were carefully screened for homogeneous doping. In this work, oleic acid stabilized Fe(3)O(4) nanoparticles synthesized via high-temperature organic-phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO(3)-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped into an acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  15. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  16. Quantitative determination of 13 organophosphorous flame retardants and plasticizers in a wastewater treatment system by high performance liquid chromatography tandem mass spectrometry.

    PubMed

    Woudneh, Million B; Benskin, Jonathan P; Wang, Guanghui; Grace, Richard; Hamilton, M Coreen; Cosgrove, John R

    2015-06-26

    A method for quantitative determination of 13 organophosphorus compounds (OPs) was developed and applied to influent, primary sludge, activated sludge, biosolids, primary effluent, and final effluent from a wastewater treatment plant (WWTP). The method involved solvent extraction followed by solid phase clean-up and analysis by high performance liquid chromatography positive electrospray ionization-tandem mass spectrometry (HPLC(+ESI)MS/MS). Replicate spike/recovery experiments revealed the method to have good accuracy (70-132%) and precision (<19% RSD) in all matrices. Detection limits of 0.1-5 ng/L for aqueous samples and 0.01-0.5 ng/g for solid samples were achieved. In the liquid waste stream, ∑OP concentrations were highest in influent (5764 ng/L), followed by primary effluent (4642 ng/L) and final effluent (2328 ng/L). In the solid waste stream, the highest ∑OP concentrations were observed in biosolids (3167 ng/g dw), followed by waste activated sludge (2294 ng/g dw) and primary sludge (2128 ng/g dw). These concentrations are nearly 30-fold higher than ∑polybrominated diphenyl ether (BDE) concentrations in influents and nearly 200-fold higher than ∑BDE concentrations in effluents from other sites in Canada. Tetrakis(2-chloroethyl)dichloroisopentyldiphosphate (V6), tripropylphosphate (TnPrP), and tris(2,3-dibromopropyl)phosphate (TDBPP) were investigated for the first time in a WWTP. While TnPrP and TDBPP were not detected, V6 was observed at concentrations up to 7.9 ng/g in solid waste streams and up to 40.7 ng/L in liquid waste streams. The lack of removal of OPs during wastewater treatment is a concern due to their release into the aquatic environment. PMID:25997845
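
    The accuracy (70-132%) and precision (<19% RSD) figures above come from spike/recovery experiments. The sketch below shows the underlying arithmetic with invented replicate values: percent recovery of a spiked amount, and relative standard deviation across replicates.

        # Accuracy and precision metrics: percent recovery of a spike and RSD
        # across replicates. Replicate values are made up for illustration.
        import statistics

        def recovery_percent(measured, background, spiked):
            """Recovered fraction of the spike, as a percentage."""
            return 100.0 * (measured - background) / spiked

        replicates = [96.1, 102.5, 88.7, 110.3, 94.9]  # ng/L, hypothetical
        recoveries = [recovery_percent(m, background=10.0, spiked=90.0)
                      for m in replicates]

        mean_rec = statistics.mean(recoveries)
        rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
        print(f"mean recovery = {mean_rec:.1f}%, RSD = {rsd:.1f}%")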

  17. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  18. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5' within 10 ms or +/-5' within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m^2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used, together with Invar, in the supporting structure for thermal stability. Both the subreflector and the moving-coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance, around 20 Hz, is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
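
    A digital PID loop of the kind described can be sketched in a few lines. The controller below runs against a toy integrator plant rather than the real resonant rocker; the 2 kHz sample rate, the gains, and the plant are all invented for illustration.

        # Minimal digital PID position loop; sample rate, gains, and the toy
        # integrator plant are invented, not ALMA values.
        DT = 1.0 / 2000.0              # 2 kHz control loop (assumed)
        KP, KI, KD = 100.0, 20.0, 0.001

        integ, prev_err = 0.0, 0.0

        def pid_step(setpoint, position):
            """One PID update; returns the actuator command."""
            global integ, prev_err
            err = setpoint - position
            integ += err * DT
            deriv = (err - prev_err) / DT
            prev_err = err
            return KP * err + KI * integ + KD * deriv

        pos = 0.0                      # toy plant: commanded rate integrates to position
        for _ in range(100):           # 50 ms of simulated time
            pos += pid_step(1.5, pos) * DT

        print(f"position after 50 ms: {pos:.3f}")   # approaches the 1.5 setpoint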

  19. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with their rather low raw materials cost, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, and our findings are beneficial to this unique materials/device system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with a high quality perovskite film, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architectures, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  20. High performance stationary phases for planar chromatography.

    PubMed

    Poole, Salwa K; Poole, Colin F

    2011-05-13

    The kinetic performance of stabilized particle layers, particle membranes, and thin films for thin-layer chromatography is reviewed, with a focus on how layer characteristics and experimental conditions affect the observed plate height. Forced flow and pressurized planar electrochromatography are identified as the best candidates to overcome the limited performance achieved by capillary flow for stabilized particle layers. For conventional and high performance plates, band broadening is dominated by molecular diffusion at the low mobile phase velocities typical of capillary flow systems, and by mass transfer, with a significant contribution from flow anisotropy, at the higher flow rates typical of forced flow systems. There are few possible changes to the structure of stabilized particle layers that would significantly improve their performance in capillary flow systems, while for forced flow a number of avenues for further study are identified. New media for ultra thin-layer chromatography show encouraging possibilities for miniaturized high performance systems, but realizing their true performance requires improvements in instrumentation for sample application and detection.
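
    The plate-height reasoning above is the van Deemter relation H(u) = A + B/u + C·u: the B/u (molecular diffusion) term dominates at the low velocities of capillary flow, while the A (flow anisotropy) and C·u (mass transfer) terms dominate under forced flow. The sketch below evaluates the relation with arbitrary illustrative coefficients.

        # van Deemter plate height H(u) = A + B/u + C*u, with arbitrary
        # illustrative coefficients.
        import math

        A, B, C = 5e-6, 2e-9, 1e-3   # m, m^2/s, s (illustrative)

        def plate_height(u):
            """Plate height H(u) for mobile-phase velocity u (m/s)."""
            return A + B / u + C * u

        u_opt = math.sqrt(B / C)     # velocity minimizing H
        print(f"u_opt = {u_opt*1e3:.3f} mm/s, H_min = {plate_height(u_opt)*1e6:.2f} um")
        for u in (1e-5, u_opt, 1e-2):  # capillary flow vs optimum vs forced flow
            print(f"u = {u:.0e} m/s -> H = {plate_height(u)*1e6:.1f} um")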

  1. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  2. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  3. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  4. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  5. Electroanalysis of sulfonamides by flow injection system/high-performance liquid chromatography coupled with amperometric detection using boron-doped diamond electrode.

    PubMed

    Preechaworapun, Anchana; Chuanuwatanakul, Suchada; Einaga, Yasuaki; Grudpan, Kate; Motomizu, Shoji; Chailapakul, Orawon

    2006-02-28

    Sulfonamides (SAs) were electrochemically investigated using cyclic voltammetry at a boron-doped diamond (BDD) electrode. Comparison experiments were carried out using a glassy carbon electrode. The BDD electrode provided well-resolved, irreversible oxidation voltammograms and higher current signals than the glassy carbon electrode. Results obtained using the BDD electrode in a flow injection system coupled with amperometric detection are presented. The optimum potential from a hydrodynamic voltammogram was found to be 1100 mV versus Ag/AgCl, which was chosen for the HPLC-amperometric system. Excellent linear range and detection limits were obtained. This method was also used for the determination of sulfonamides in egg samples. Standard solutions of 5, 10, and 15 ppm were spiked into a real sample, and recoveries were between 90.0% and 107.7%.

  6. Development and application of a specially designed heating system for temperature-programmed high-performance liquid chromatography using subcritical water as the mobile phase.

    PubMed

    Teutenberg, T; Goetze, H-J; Tuerk, J; Ploeger, J; Kiffmeyer, T K; Schmidt, K G; Kohorst, W; Rohe, T; Jansen, H-D; Weber, H

    2006-05-01

    A specially designed heating system for temperature-programmed HPLC was developed based on experimental measurements of eluent temperature inside a stainless steel capillary using a very thin thermocouple. The heating system can be operated at temperatures up to 225 degrees C and consists of a preheating, a column heating and a cooling unit. Fast cycle times after a temperature gradient can be realized by an internal silicone oil bath which cools down the preheating and column heating unit. Long-term thermal stability of a polybutadiene-coated zirconium dioxide column has been evaluated using a tubular oven in which the column was placed. The packing material was stable after 50h of operation at 185 degrees C. A mixture containing four steroids was separated at ambient conditions using a mobile phase of 25% acetonitrile:75% deionized water and a mobile phase of pure deionized water at 185 degrees C using the specially designed heating system and the PBD column. Analysis time could be drastically reduced from 17 min at ambient conditions and a flow rate of 1 mL/min to only 1.2 min at 185 degrees C and a flow rate of 5 mL/min. At these extreme conditions, no thermal mismatch was observed and peaks were not distorted, thus underlining the performance of the developed heating system. Temperature programming was performed by separating cytostatic and antibiotic drugs with a temperature gradient using only water as the mobile phase. In contrast to an isocratic elution of this mixture at room temperature, overall analysis time could be reduced two-fold from 20 to 10 min. PMID:16530210

  7. A Novel Low-Power, High-Performance, Zero-Maintenance Closed-Path Trace Gas Eddy Covariance System with No Water Vapor Dilution or Spectroscopic Corrections

    NASA Astrophysics Data System (ADS)

    Sargent, S.; Somers, J. M.

    2015-12-01

    Trace-gas eddy covariance flux measurements can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictate the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean. Water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically-cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 mL, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power bandwidth of 3.5 Hz.
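
    At the core of the technique is the eddy flux F = mean(w'c'), the covariance of vertical-wind and concentration fluctuations over an averaging block. The sketch below computes it on synthetic time series; a real system would also convert to areal units and apply the frequency-response and dilution corrections discussed above.

        # Eddy covariance flux F = mean(w'c') on synthetic sonic-anemometer
        # and gas-analyzer time series.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 36000                                  # 1 h at 10 Hz
        w = rng.normal(0.0, 0.3, n)                # vertical wind (m/s)
        c = 320.0 + 0.05 * w + rng.normal(0.0, 0.5, n)  # N2O (nmol/mol), correlated with w

        w_prime = w - w.mean()                     # Reynolds decomposition
        c_prime = c - c.mean()
        flux = np.mean(w_prime * c_prime)          # kinematic flux

        print(f"kinematic flux = {flux:.4f} m s-1 nmol mol-1")
        # A real system converts this to areal units with air density; the
        # dryer described above removes the water-vapor dilution term rather
        # than correcting for it numerically.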

  8. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    PubMed Central

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For catalysis of the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The obtained fixed-bed system exhibited performance superior even to the conventional batch reaction process because it greatly increased the efficiency of the catalytic fibers: its turnover frequency (TOF) was up to 1.587 min^-1, versus 0.643 min^-1 in the conventional batch reaction. The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical, scalable, and amenable to automated industrial processes, and it can also be applied to coating different microfibers and loading other noble metal nanoparticles. PMID:26902657
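
    The turnover frequency quoted above is moles of substrate converted per mole of catalytic metal per unit time. The sketch below shows the arithmetic with hypothetical quantities that are not taken from the paper.

        # Turnover frequency: moles converted per mole of catalyst per minute.
        # All quantities below are hypothetical stand-ins.
        def turnover_frequency(mol_converted, mol_catalyst, minutes):
            """TOF in min^-1."""
            return mol_converted / (mol_catalyst * minutes)

        # Hypothetical: 0.5 mM substrate fed at 60 mL/min, fully converted for
        # 10 min over 1.9e-4 mol of Pd.
        mol_converted = 0.5e-3 * 0.060 * 10      # mol = M * L/min * min
        tof_fixed_bed = turnover_frequency(mol_converted, 1.9e-4, 10)
        print(f"fixed-bed TOF = {tof_fixed_bed:.3f} min^-1")   # ~0.158 min^-1 here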

  9. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance.

    PubMed

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For catalysis of the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The obtained fixed-bed system exhibited performance superior even to the conventional batch reaction process because it greatly increased the efficiency of the catalytic fibers: its turnover frequency (TOF) was up to 1.587 min^-1, versus 0.643 min^-1 in the conventional batch reaction. The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical, scalable, and amenable to automated industrial processes, and it can also be applied to coating different microfibers and loading other noble metal nanoparticles. PMID:26902657

  10. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    NASA Astrophysics Data System (ADS)

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-02-01

    Inspired by the composition of the adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (PDA@CMF) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For catalysis of the reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The obtained fixed-bed system exhibited performance superior even to the conventional batch reaction process because it greatly increased the efficiency of the catalytic fibers: its turnover frequency (TOF) was up to 1.587 min^-1, versus 0.643 min^-1 in the conventional batch reaction. The catalytic fibers also showed good recyclability and could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical, scalable, and amenable to automated industrial processes, and it can also be applied to coating different microfibers and loading other noble metal nanoparticles.

  11. High-performance solar collector

    NASA Technical Reports Server (NTRS)

    Beekley, D. C.; Mather, G. R., Jr.

    1979-01-01

    Evacuated all-glass concentric tube collector using air or liquid transfer mediums is very efficient at high temperatures. Collector can directly drive existing heating systems that are presently driven by fossil fuel with relative ease of conversion and less expense than installation of complete solar heating systems.

  12. Facilitating NASA's Use of GEIA-STD-0005-1, Performance Standard for Aerospace and High Performance Electronic Systems Containing Lead-Free Solder

    NASA Technical Reports Server (NTRS)

    Plante, Jeannete

    2010-01-01

    GEIA-STD-0005-1 defines the objectives of, and requirements for, documenting processes that assure customers and regulatory agencies that aerospace and high performance (AHP) electronic systems containing lead-free solder, piece parts, and boards will satisfy the applicable requirements for performance, reliability, airworthiness, safety, and certifiability throughout the specified life of performance. It communicates requirements for a Lead-Free Control Plan (LFCP) to assist suppliers in the development of their own Plans. The Plan documents the Plan Owner's (supplier's) processes that assure their customers and all other stakeholders that the Plan Owner's products will continue to meet their requirements. The presentation reviews quality assurance requirements traceability and LFCP template instructions.

  13. A radio-high-performance liquid chromatography dual-flow cell gamma-detection system for on-line radiochemical purity and labeling efficiency determination.

    PubMed

    Lindegren, S; Jensen, H; Jacobsson, L

    2014-04-11

    In this study, a method of determining radiochemical yield and radiochemical purity using radio-HPLC detection employing a dual-flow-cell system is evaluated. The dual-flow cell, consisting of a reference cell and an analytical cell, was constructed from two PEEK capillary coils to fit into the well of a NaI(Tl) detector. The radio-HPLC flow was directed from the injector to the reference cell allowing on-line detection of the total injected sample activity prior to entering the HPLC column. The radioactivity eluted from the column was then detected in the analytical cell. In this way, the sample will act as its own standard, a feature enabling on-line quantification of the processed radioactivity passing through the system. All data were acquired on-line via an analog signal from a rate meter using chromatographic software. The radiochemical yield and recovery could be simply and accurately determined by integration of the peak areas in the chromatogram obtained from the reference and analytical cells using an experimentally determined volume factor to correct for the effect of different cell volumes. PMID:24630054
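
    The quantification step reduces to a ratio of integrated peak areas corrected by the experimentally determined volume factor. The sketch below illustrates it with invented areas and an assumed volume factor.

        # Radiochemical yield from the analytical-cell peak area relative to
        # the reference-cell (total activity) area, corrected by a volume
        # factor. Areas and the factor are invented for illustration.
        def radiochemical_yield(product_area, reference_area, volume_factor):
            """Fraction of injected activity eluting as the product peak."""
            return product_area / (reference_area * volume_factor)

        reference_area = 1.00e6          # total injected activity (reference cell)
        product_area = 8.2e5             # product peak (analytical cell)
        volume_factor = 0.92             # corrects for unequal cell volumes (assumed)

        ry = radiochemical_yield(product_area, reference_area, volume_factor)
        print(f"radiochemical yield = {ry:.1%}")   # -> 89.1%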

  16. Rapid separation of tritiated thyrotropin-releasing hormone and its catabolic products from mouse and human central nervous system tissues by high-performance liquid chromatography with radioactive flow detection.

    PubMed

    Turner, J G; Schwartz, T M; Brooks, B R

    1989-02-24

    Reversed-phase high-performance liquid chromatography with radioactive flow detection was utilized to investigate the catabolism of thyrotropin-releasing hormone (TRH) in central nervous system (CNS) tissues. Two different column/gradient solvent systems were tested: (1) octadecylsilane (ODS) with an acetic acid-acetonitrile gradient and (2) poly(styrenedivinylbenzene) (PRP-1) with a trifluoroacetic acid-acetonitrile gradient. Both systems used 1-hexanesulfonic acid as the second ion-pairing reagent and yielded excellent separation of TRH and its catabolic products, TRH acid, cyclo(histidyl-proline), histidyl-proline, proline, and prolinamide, produced in CNS tissue homogenates. The PRP-1 column with a trifluoroacetic acid-acetonitrile solvent system produced a better and more reproducible separation of TRH catabolic products than the ODS column with the acetic acid-acetonitrile solvent system. This PRP-1 technique was utilized to demonstrate different rates and products of TRH catabolism in mouse and human spinal cord compared with cerebral cortex.

  17. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high-performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  18. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  19. EDITORIAL: High performance under pressure High performance under pressure

    NASA Astrophysics Data System (ADS)

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components, and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to an applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane CMOS-based and fully compatible with future very-large scale integration of

  20. High-performance, highly bendable MoS2 transistors with high-k dielectrics for flexible low-power systems.

    PubMed

    Chang, Hsiao-Yu; Yang, Shixuan; Lee, Jongho; Tao, Li; Hwang, Wan-Sik; Jena, Debdeep; Lu, Nanshu; Akinwande, Deji

    2013-06-25

    While there have been increasing studies of MoS2 and other two-dimensional (2D) semiconducting dichalcogenides on hard conventional substrates, experimental and analytical studies on flexible substrates have been very limited so far, even though these 2D crystals are understood to have greater prospects for flexible smart systems. In this article, we report detailed studies of MoS2 transistors on industrial plastic sheets. Transistor characteristics afford more than 100x improvement in the ON/OFF current ratio and 4x enhancement in mobility compared to previous flexible MoS2 devices. Mechanical studies reveal robust electronic properties down to a bending radius of 1 mm, which is comparable to previous reports for flexible graphene transistors. Experimental investigation identifies crack formation in the dielectric as the responsible failure mechanism, demonstrating that the mechanical properties of the dielectric layer are critical for realizing flexible electronics that can accommodate high strain. Our uniaxial tensile tests have revealed that atomic-layer-deposited HfO2 and Al2O3 films have very similar crack onset strain. However, crack propagation is slower in the HfO2 dielectric than in the Al2O3 dielectric, suggesting a subcritical fracture mechanism in the thin oxide films. Rigorous mechanics modeling provides guidance for achieving flexible MoS2 transistors that are reliable at sub-mm bending radius.
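
    For intuition about the 1 mm bending radius quoted above, the common thin-film estimate strain ≈ t/(2R) gives the surface strain of a film on a substrate of thickness t bent to radius R. The substrate thickness below is an assumption; the abstract does not specify the plastic sheet.

        t_substrate = 125e-6   # m, assumed plastic substrate thickness
        radius = 1e-3          # m, the 1 mm bending radius reported above

        strain = t_substrate / (2 * radius)
        print(f"estimated surface strain = {100 * strain:.2f}%")  # about 6.25%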

  1. Designing and simulation smart multifunctional continuous logic device as a basic cell of advanced high-performance sensor systems with MIMO-structure

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-01-01

    We have proposed a design and simulation of hardware realizations of smart multifunctional continuous logic devices (SMCLD) as advanced basic cells of sensor systems with MIMO structure for image processing and interconnection. The SMCLD realize functions of two-valued, multi-valued, and continuous logics with current inputs and current outputs. Such advanced basic cells also realize nonlinear time-pulse transformation, analog-to-digital conversion, and neural logic. We show the advantages of such elements: high speed and reliability, simplicity, small power consumption, and a high integration level. The SMCLD design is based on current mirrors realized with 1.5 µm CMOS transistors. With only 50-70 transistors, one photodiode, and one LED, the proposed circuits are quite compact. The simulation results for NOT, MIN, MAX, equivalence (EQ), normalized summation, averaging, and other functions implemented by the SMCLD show that the level of logical variables can range from 0.1 µA to 10 µA for low-power variants. The SMCLD have low power consumption (<1 mW) and a processing time of about 1-11 µs at a supply voltage of 2.4-3.3 V.
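
    The logic functions listed above have simple software analogues on signals normalized to [0, 1]; the sketch below is only a behavioral model (the hardware operates on currents of 0.1-10 µA), and the equivalence form 1 - |a - b| is one common continuous-logic convention, assumed here.

        def c_not(a): return 1.0 - a               # continuous negation
        def c_min(a, b): return min(a, b)          # continuous AND
        def c_max(a, b): return max(a, b)          # continuous OR
        def c_eq(a, b): return 1.0 - abs(a - b)    # continuous equivalence (assumed form)
        def c_mean(*xs): return sum(xs) / len(xs)  # normalized (averaging) summation

        print(c_not(0.3), c_min(0.3, 0.8), c_max(0.3, 0.8), c_eq(0.3, 0.8), c_mean(0.3, 0.8))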

  2. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine has been designed and built that achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.
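
    To put "49% of the Carnot efficiency" in absolute terms, the sketch below converts it to a thermal efficiency for assumed hot- and cold-side temperatures (the operating temperatures are not given in this abstract).

        T_hot = 873.0    # K, assumed heater temperature
        T_cold = 293.0   # K, assumed ambient temperature

        eta_carnot = 1.0 - T_cold / T_hot
        eta_engine = 0.49 * eta_carnot
        print(f"Carnot limit = {eta_carnot:.1%}, implied engine efficiency = {eta_engine:.1%}")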

  3. High performance electrolytes for MCFC

    DOEpatents

    Kaun, T.D.; Roche, M.F.

    1999-08-24

    A carbonate electrolyte of the Li/Na or CaBaLiNa system is described. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of CaCO3 and BaCO3, preferably in equimolar amounts. The presence of both CaCO3 and BaCO3 enables lower-temperature fuel cell operation. 15 figs.

  4. High performance electrolytes for MCFC

    DOEpatents

    Kaun, Thomas D.; Roche, Michael F.

    1999-01-01

    A carbonate electrolyte of the Li/Na or CaBaLiNa system. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of CaCO3 and BaCO3, preferably in equimolar amounts. The presence of both CaCO3 and BaCO3 enables lower-temperature fuel cell operation.

  5. A micro trapping system coupled with a high performance liquid chromatography procedure for methylamine determination in both tissue and cigarette smoke.

    PubMed

    Zhang, Yongqian; Mao, Jian; Yu, Peter H; Xiao, Shengyuan

    2012-11-01

    Both endogenous and exogenous methylamine have been found to be involved in many human disorders. The quantitative assessment of methylamine has drawn considerable interest in recent years. Although there have been many papers on the determination of methylamine, only a few of them involve cigarette smoke or mammalian tissue analysis. The major hurdles in the determination of methylamine are the collection of methylamine from samples and the differentiation of methylamine from background compounds, e.g., biogenic amines. We have solved this problem using a micro trapping system coupled with an HPLC procedure. The interference from other biogenic amines has been avoided. The high selectivity of this method was achieved using four techniques: distillation, trapping, HPLC separation, and selective detection. The chromatograms of both mouse tissues and cigarette smoke are simple, with only a few peaks. The method is easy and efficient, and it has been validated and applied to the determination of methylamine in tissues of normal CD-1 mice and in cigarette smoke. The methylamine contents were determined to be approximately 268.3 ng g(-1) in the liver, 429.5 ng g(-1) in the kidney, and 547.4 ng g(-1) in the brain. The methylamine in cigarette smoke was approximately 213 ng to 413 ng per cigarette. These results in tissues and in cigarette smoke were found to be consistent with the data in the previous literature. To the best of our knowledge, this is the first report of a method suitable for methylamine analysis in both mammalian tissue and cigarette smoke. PMID:23101659

  7. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1 M phosphate buffer at pH 2.5 in a 50 µm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and thus brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.

  8. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the benefits and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
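
    The abstract does not detail the SmartIO mechanism, but the textbook building block for COTS fault mitigation of this kind is majority voting across redundant computations; the sketch below shows that generic idea only and is not the SmartIO design.

        from collections import Counter

        def tmr_vote(a, b, c):
            """Return the majority value of three redundant results;
            raise if all three disagree (uncorrectable fault)."""
            value, count = Counter([a, b, c]).most_common(1)[0]
            if count < 2:
                raise RuntimeError("no majority: uncorrectable fault")
            return value

        print(tmr_vote(42, 42, 17))  # -> 42: a single upset value is masked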

  9. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, is limited in the number of channels, often suffers channel crosstalk, and is intermittent as the shaft rotates. Non-real-time digital communication is very popular but complex, with long development times and objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, the bit resolution, and the update rate gives a data rate higher than that available from older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation, with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provide instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to
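
    The data-rate product mentioned above is easy to make concrete; only the 40-channel count comes from the abstract, and the resolution and update rate below are assumed.

        channels = 40             # sensors multiplexed into the serial ADC
        bits_per_sample = 16      # assumed ADC resolution
        updates_per_sec = 10_000  # assumed per-channel update rate

        data_rate_bps = channels * bits_per_sample * updates_per_sec
        print(f"aggregate data rate = {data_rate_bps / 1e6:.1f} Mbit/s")  # 6.4 Mbit/s here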

  10. Application of the Modified Clavien Classification System to 120W Greenlight High-Performance System Photoselective Vaporization of the Prostate for Benign Prostatic Hyperplasia: Is It Useful for Less-Invasive Procedures?

    PubMed Central

    Kwon, Ohseong; Park, Sohyun; Jeong, Min Young; Cho, Sung Yong

    2013-01-01

    Purpose: To evaluate the accuracy and applicability of the modified Clavien classification system (CCS) in evaluating complications following photoselective vaporization of the prostate by use of the 120W GreenLight high-performance system (HPS-PVP). Materials and Methods: The medical records of 342 men who underwent HPS-PVP were retrospectively analyzed. Patients were older than 40 years and had a prostate volume >30 mL and an International Prostate Symptom Score (IPSS) ≥8. Patients with prostatic malignancy, neurogenic bladder, urethral stricture, large postvoid residual volume (>250 mL), previous prostatic surgery, or urinary tract infection were excluded. All operations were done by a single surgeon, and patients were followed up with uroflowmetry and the IPSS postoperatively. All complications were recorded and classified according to the modified CCS, and methods of management were also recorded. Results: The patients' mean age was 71.6±7.3 years; mean prostate volume was 50.0±17.0 mL, and 95 cases (27.7%) had volumes greater than 70 mL. The mean total IPSS was 21.7±7.9 preoperatively and 12.3±8.1 at the first month postoperatively. A total of 59 patients (17.3%) experienced postoperative complications up to the first month after surgery. Among them, 49 patients (14.3%) showed grade I complications, 9 patients (2.6%) showed grade II complications, and 1 patient (0.3%) showed a grade IIIb complication. No patients had complications graded higher than IIIb. Conclusions: Although the modified CCS is a useful tool for communication among clinicians in allowing comparison of surgical outcomes, this classification should be revised to gain higher accuracy and applicability in the evaluation of postoperative complications of HPS-PVP. PMID:23614060

  11. High Performance Field Reversed Configurations

    NASA Astrophysics Data System (ADS)

    Binderbauer, Michl

    2014-10-01

    The field-reversed configuration (FRC) is a prolate compact toroid with poloidal magnetic fields. FRCs could lead to economic fusion reactors with high power density, simple geometry, a natural divertor, ease of translation, and possibly the capability of burning aneutronic fuels. However, as in other high-beta plasmas, there are stability and confinement concerns. These concerns can be addressed by introducing and maintaining a significant fast-ion population in the system. This is the approach adopted by TAE and implemented for the first time in the C-2 device. Studying the physics of FRCs driven by neutral beam (NB) injection, significant improvements were made in confinement and stability. Early C-2 discharges had relatively good confinement, but global power losses exceeded the available NB input power. The addition of axially streaming plasma guns and magnetic end plugs, as well as advanced surface conditioning, led to dramatic reductions in turbulence-driven losses and greatly improved stability. As a result, fast-ion confinement improved significantly and allowed the build-up of a dominant fast-particle population. Under such conditions we achieved highly reproducible, long-lived, macroscopically stable FRCs with record lifetimes. This demonstrated many beneficial effects of large-orbit particles and their performance impact on FRCs. Together these achievements point to the prospect of beam-driven FRCs as a path toward fusion reactors. This presentation will review and expand on key results and present context for their interpretation.

  12. High-performance liquid chromatography.

    PubMed

    Clevett, K J

    1990-01-01

    Gas chromatography has developed over the past 25 years or so into one of the most extensively used on-line analytical techniques in industrial process control and optimization. Liquid chromatography, and its several individual techniques, is firmly established in the laboratory, but its on-line process use has not developed as rapidly as GC. At the present time, only three companies (Applied Automation Inc., Dionex Corp., and Millipore Corp.) are active in this area. Nevertheless, substantial growth in on-line process LC is predicted for the next few years. The techniques of HPLC (normal-phase and reversed-phase), IEC, and SEC have great potential in industry as on-line analytical techniques, including the new field of biotechnology. Computer-based, multistream, multicomponent systems should find extensive use in pilot-plant investigations, where their ability to gather large amounts of data (on-line rather than by laboratory testing) could have important implications. In bioprocess control, undoubtedly the greatest challenge will come in the area of sample-handling technique. On-line chromatography has traditionally involved the sampling and conditioning of fairly conventional process gases and liquids. One exception is in the plastics and elastomers areas, where on-line SEC has been used for polymer MWD measurement. Here the sample is more difficult to handle, and some specialized techniques have been used. In biotechnology, we are treading new ground; nevertheless, it is hoped that some of the experience in sample handling gained in industry over the past 25 years will be of use in this new field.

  13. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  14. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  15. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.
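
    Frequency-stability comparisons of this kind are conventionally quantified with the Allan deviation; the sketch below computes a simple non-overlapping estimate on synthetic white-frequency noise (the USNO data are, of course, not reproduced here).

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping Allan deviation of fractional-frequency data y
            at averaging factor m (tau = m * tau0)."""
            n = len(y) // m
            ybar = y[: n * m].reshape(n, m).mean(axis=1)  # tau-averaged frequencies
            return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

        rng = np.random.default_rng(0)
        y = 1e-13 * rng.standard_normal(100_000)  # synthetic white frequency noise
        for m in (1, 10, 100):
            print(m, allan_deviation(y, m))  # falls roughly as 1/sqrt(m) for white noise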

  16. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  17. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed at the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum-impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
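
    The two figures quoted above imply a characteristic discharge time, a quick consistency check shown below.

        specific_energy_Wh_kg = 5.0   # useful energy claimed above
        specific_power_W_kg = 600.0   # power rating claimed above

        discharge_time_s = specific_energy_Wh_kg * 3600.0 / specific_power_W_kg
        print(f"full-power discharge time = {discharge_time_s:.0f} s")  # 30 s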

  18. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now widespread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  19. Properties Of High-Performance Thermoplastics

    NASA Technical Reports Server (NTRS)

    Johnston, Norman J.; Hergenrother, Paul M.

    1992-01-01

    The report presents a review of the principal thermoplastics (TPs) used to fabricate high-performance composites. Sixteen principal TPs considered as candidates for the fabrication of high-performance composites are presented along with the names of suppliers, Tg, Tm (for semicrystalline polymers), and approximate maximum processing temperatures.

  20. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  1. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has developed considerably from the early short linear arrays. Such devices require high performance signal processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  2. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  3. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    Designs of high-performance piezo composite actuators are developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface that exists in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties that vary through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D elasticity are compared with a model based on the finite element method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input into the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios, while the CLT solutions gave poor results compared to the 2D elasticity solutions for
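
    For scale, the classical estimate for the free tip deflection of an ideal series bimorph cantilever is delta = 3*d31*V*L^2 / (2*t^2); the numbers below are assumed and merely illustrate the kind of baseline the FGM designs above are compared against.

        d31 = -190e-12   # m/V, assumed soft-PZT piezoelectric coefficient
        V = 100.0        # V, assumed drive voltage
        L = 25e-3        # m, assumed free length
        t = 1e-3         # m, assumed total laminate thickness

        delta = 3 * d31 * V * L**2 / (2 * t**2)
        print(f"tip deflection = {abs(delta) * 1e6:.0f} um")  # about 18 um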

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth systems data collections sourced from several domains and organisations on a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources, and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse, and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems, and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across domains, and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment for access to this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  5. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  6. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  7. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer's resources. From the perspective of Bechtel Corporation and the Idaho

  8. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  9. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as Infiniband, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system that combines both collective and point-to-point communications into phases and applies throttling, in addition to DVFS, to maximize energy savings. Experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
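
    A toy model of the phase-based decision described above: throttle only when a phase is so communication-bound that the projected runtime loss stays within a budget. All constants are invented for illustration; the work's actual policies are more elaborate.

        F_MAX, F_LOW = 2.6, 1.2   # GHz, assumed available frequencies
        LOSS_BUDGET = 0.05        # tolerate at most 5% longer runtime

        def pick_frequency(phase_kind, comm_fraction):
            """Return the frequency for a phase; comm_fraction is the share of
            the phase spent waiting on communication."""
            projected_loss = (1 - comm_fraction) * (F_MAX / F_LOW - 1)
            if phase_kind == "communication" and projected_loss <= LOSS_BUDGET:
                return F_LOW
            return F_MAX

        print(pick_frequency("communication", 0.97))  # -> 1.2 GHz, safe to scale down
        print(pick_frequency("compute", 0.0))         # -> 2.6 GHz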

  10. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  11. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  12. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity, and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling-load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air-control element; reduced operating costs by minimizing lighting, cooling, and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort, and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  13. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial for the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  14. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report on our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle-light suppression. The testbed consists of a coronagraph with high-precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational, awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations, including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  15. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring the performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  16. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star, which is centered on the optical axis in the image plane, is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter, which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  17. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

    Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of the hospitals. The basic structure of the PACS and the teleradiology being used is outlined. The way the PACS are being used in the hospitals is instructive. The three heaviest users of radiology in the hospital are the wards (including the ICU wards), the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians. For radiology, the same staff handles 30 to 50 percent more workload. For the clinicians, 10 to 20 percent of their time is saved in dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital in less than two seconds, even during the busiest hour of the day. The examination management functions restrict the attention of any one user to the examinations that are of interest. The examination management organizes the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. An examination can be acquired, read, and the report dictated in much less than ten minutes; the transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  18. Design and performance of a new continuous-flow sample-introduction system for flame infrared-emission spectrometry: Applications in process analysis, flow injection analysis, and ion-exchange high-performance liquid chromatography.

    PubMed

    Lam, C K; Zhang, Y; Busch, M A; Busch, K W

    1993-06-01

    A new sample introduction system for the analysis of continuously flowing liquid streams by flame infrared-emission (FIRE) spectrometry has been developed. The system uses a specially designed purge cell to strip dissolved CO(2) from solution into a hydrogen gas stream that serves as the fuel for a hydrogen/air flame. Vibrationally excited CO(2) molecules present in the flame are monitored with a simple infrared filter (4.4 µm) photometer. The new system can be used to introduce analytes as a continuous liquid stream (process analysis mode) or on a discrete basis by sample injection (flow injection analysis mode). The key to the success of the method is the new purge-cell design. The small internal volume of the cell minimizes problems associated with purge-cell clean-out and produces sharp, reproducible signals. Spent analytical solution is continuously drained from the cell, making cell disconnection and cleaning between samples unnecessary. Under the conditions employed in this study, samples could be analyzed at a maximum rate of approximately 60/h. The new sample introduction system was successfully tested in both the process analysis and the flow injection analysis modes for the determination of total inorganic carbon in Waco tap water. For the first time, flame infrared-emission spectrometry was successfully extended to non-volatile organic compounds by using chemical pretreatment with peroxydisulfate in the presence of silver ion to convert the analytes into dissolved carbon dioxide prior to purging and detection by the FIRE radiometer. A test of the peroxydisulfate/Ag(+) reaction using six organic acids and five sugars indicated that all 11 compounds were oxidized to nearly the same extent. Finally, the new sample introduction system was used in conjunction with a simple filter FIRE radiometer as a detection system in ion-exchange high-performance liquid chromatography. Ion-exchange chromatograms are shown for two aqueous mixtures, one containing six organic

  19. Separation, concentration and determination of chloramphenicol in environment and food using an ionic liquid/salt aqueous two-phase flotation system coupled with high-performance liquid chromatography.

    PubMed

    Han, Juan; Wang, Yun; Yu, Cuilan; Li, Chunxiang; Yan, Yongsheng; Liu, Yan; Wang, Liang

    2011-01-31

    Ionic liquid-salt aqueous two-phase flotation (ILATPF) is a novel, green, non-toxic, and sensitive sample pretreatment technique. ILATPF coupled with high-performance liquid chromatography (HPLC) was developed for the analysis of chloramphenicol; it combines an ionic liquid aqueous two-phase system (ILATPS) based on an imidazolium ionic liquid (1-butyl-3-methylimidazolium chloride, [C(4)mim]Cl) and an inorganic salt (K(2)HPO(4)) with solvent sublation. Phase behaviors of the ILATPF were studied for different types of ionic liquids and salts. The sublation efficiency of chloramphenicol in the [C(4)mim]Cl-K(2)HPO(4) ILATPF was influenced by the type of salt, the concentration of K(2)HPO(4) in aqueous solution, the solution pH, the nitrogen flow rate, the sublation time, and the amount of [C(4)mim]Cl. Under the optimum conditions, the average sublation efficiency reaches 98.5%. The mechanism of ILATPF involves two principal processes: the formation of the IL-salt ILATPS and solvent sublation. The method proved practical when applied to the analysis of chloramphenicol in lake water, feed water, milk, and honey samples, with a linear range of 0.5-500 ng mL(-1), a limit of detection (LOD) of 0.1 ng mL(-1), and a limit of quantification (LOQ) of 0.3 ng mL(-1). The recovery of CAP from environmental and food samples was 97.1-101.9% by the proposed method. Compared with liquid-liquid extraction, solvent sublation, and ionic liquid aqueous two-phase extraction, ILATPF can not only separate and concentrate chloramphenicol with high sublation efficiency but also efficiently reduce the wastage of IL. This novel technique is much simpler and more environmentally friendly and is suggested to have important applications for the concentration and separation of other small biomolecules. PMID:21168562
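
    LOD and LOQ figures like those quoted above are conventionally obtained from the calibration curve as 3.3 sigma/S and 10 sigma/S, where S is the slope and sigma the residual standard deviation of the regression. The short sketch below illustrates only the arithmetic; the standards and peak areas are invented, not data from this study.

        import numpy as np

        conc = np.array([0.5, 5.0, 50.0, 250.0, 500.0])          # ng/mL standards
        area = np.array([12.0, 118.0, 1193.0, 5958.0, 11890.0])  # HPLC peak areas

        S, b = np.polyfit(conc, area, 1)                 # linear calibration fit
        sigma = np.std(area - (S*conc + b), ddof=2)      # residual std. deviation

        print("LOD ~ %.2f ng/mL, LOQ ~ %.2f ng/mL" % (3.3*sigma/S, 10*sigma/S))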

  20. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  1. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  2. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  3. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  4. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
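
    The grouping step in this abstract is easy to picture with a toy example: collect each thread's calling-instruction addresses, bucket identical call paths together, and print the groups smallest-first so outlier (possibly defective) threads stand out. The sketch below illustrates that idea only; the addresses are fabricated, and the patented system operates on live traceback data rather than a hard-coded table.

        from collections import defaultdict

        # hypothetical per-thread call stacks (tuples of calling addresses)
        stacks = {
            0: (0x4005d0, 0x400710),
            1: (0x4005d0, 0x400710),
            2: (0x4005d0, 0x400850),   # diverges from the other threads
            3: (0x4005d0, 0x400710),
        }

        groups = defaultdict(list)
        for tid, addrs in stacks.items():
            groups[addrs].append(tid)  # identical stacks share one group

        # small groups first: a lone thread stuck elsewhere is likely defective
        for addrs, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
            trace = " -> ".join(hex(a) for a in addrs)
            print("%d thread(s) at %s: %s" % (len(tids), trace, tids))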

  5. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  6. Co-design for High Performance Computing

    NASA Astrophysics Data System (ADS)

    Rodrigues, Arun; Dosanjh, Sudip; Hemmert, Scott

    2010-09-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  7. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  8. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466
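
    One simple way to realize such an estimator, consistent with the approach the abstract describes but not taken from it, is to regress recorded executions of each pipeline step against an input-size feature and pad the prediction with a safety margin before job submission. All numbers below are fabricated for illustration.

        import numpy as np

        # (input voxels in millions, observed runtime s, observed peak memory MB)
        history = np.array([
            [ 8.0,  620.0,  1900.0],
            [16.0, 1150.0,  3400.0],
            [32.0, 2300.0,  6800.0],
            [64.0, 4500.0, 13500.0],
        ])

        size = history[:, 0]
        X = np.vstack([size, np.ones_like(size)]).T   # linear model: a*size + b
        rt_coef, *_ = np.linalg.lstsq(X, history[:, 1], rcond=None)
        mem_coef, *_ = np.linalg.lstsq(X, history[:, 2], rcond=None)

        def request(new_size, margin=1.25):
            # pad predictions so jobs are not killed for exceeding their limits,
            # at the cost of slightly longer queue waits
            rt = (rt_coef[0]*new_size + rt_coef[1]) * margin
            mem = (mem_coef[0]*new_size + mem_coef[1]) * margin
            return rt, mem

        print("request for a 48M-voxel input: %.0f s, %.0f MB" % request(48.0))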

  10. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using InfiniBand (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  11. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  12. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
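
    Although the abstract is truncated, the distinction it draws is easy to demonstrate: a collective call replaces a hand-written loop of point-to-point messages and lets the library use a platform-tuned algorithm. The sketch below uses mpi4py as one concrete MPI binding (an assumption; the thesis itself is not tied to any particular library) and runs with, e.g., mpiexec -n 4 python demo.py.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # point-to-point sum: every process sends to rank 0, which accumulates;
        # rank 0 handles one message per process, O(size) sequential steps
        if rank == 0:
            total = rank
            for src in range(1, size):
                total += comm.recv(source=src)
        else:
            comm.send(rank, dest=0)

        # collective sum: one call, typically an O(log size) tree inside the library
        total_all = comm.allreduce(rank, op=MPI.SUM)

        if rank == 0:
            print("point-to-point:", total, "collective:", total_all)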

  13. High Performance Lasers and LEDs for Optical Communication

    NASA Astrophysics Data System (ADS)

    Nelson, R. J.

    1987-01-01

    High performance 1.3 µm lasers and LEDs have been developed for optical communications systems. The lasers exhibit low threshold currents, excellent high speed and spectral characteristics, and high reliability. The surface emitting LEDs provide launched powers greater than -15 dBm into 62.5 µm core fiber with rise and fall times suitable for operation to 220 Mb/s.

  14. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified, along with those induced by rotor bearing systems. Several methods of rotordynamic control are described, including active feedback methods, the use of elastomeric elements, and the use of hydrodynamic journal bearings and supports.

  15. Extraction and determination of chloramphenicol in feed water, milk, and honey samples using an ionic liquid/sodium citrate aqueous two-phase system coupled with high-performance liquid chromatography.

    PubMed

    Han, Juan; Wang, Yun; Yu, Cui-lan; Yan, Yong-sheng; Xie, Xue-qiao

    2011-01-01

    A green, simple, non-toxic, and sensitive sample pretreatment procedure coupled with high-performance liquid chromatography (HPLC) was developed for the analysis of chloramphenicol (CAP) that exploits an aqueous two-phase system based on an imidazolium ionic liquid (1-butyl-3-methylimidazolium tetrafluoroborate, [Bmim]BF(4)) and an organic salt (Na(3)C(6)H(5)O(7)) using a liquid-liquid extraction technique. The factors influencing the partition behavior of CAP were studied, including the type and amount of salt, the pH value, the volume of [Bmim]BF(4), and the extraction temperature. The extraction efficiency of CAP was found to increase with increasing temperature and volume of [Bmim]BF(4). Thermodynamic studies indicated that hydrophobic interactions were the main driving force, although electrostatic interactions and salting-out effects were also important for the transfer of CAP. Under the optimal conditions, 90.1% of the CAP could be extracted into the ionic liquid-rich phase in a single-step extraction. The method proved practical when applied to the analysis of CAP in feed water, milk, and honey samples, with a linear range of 2-1,000 ng mL(-1), a limit of detection of 0.3 ng mL(-1), and a limit of quantification of 1.0 ng mL(-1). The recovery of CAP from real feed water, milk, and honey samples was 90.4-102.7% by the proposed method. This novel process is much simpler and more environmentally friendly and is suggested to have important applications for the separation of antibiotics. PMID:21063686

  16. Single-step electrotransfer of reverse-stained proteins from sodium dodecyl sulfate-polyacrylamide gel onto reversed-phase minicartridge and subsequent desalting and elution with a conventional high-performance liquid chromatography gradient system for analysis.

    PubMed

    Fernandez-Patron, C; Madrazo, J; Hardy, E; Mendez, E; Frank, R; Castellanos-Serra, L

    1995-06-01

    Isolation of proteins from polyacrylamide electrophoresis gels by a novel combination of techniques is described. A given protein band from a reverse-stained (imidazole-sodium dodecyl sulfate-zinc salts) gel can be directly electrotransferred onto a reversed-phase chromatographic support packed in a self-made minicartridge (2 mm in thickness, 8 mm in internal diameter, made of inert polymeric materials). The minicartridge is then connected to a high-performance liquid chromatography system and the electrotransferred protein eluted by applying an acetonitrile gradient. Proteins elute in a small volume (<700 µL) of high-purity volatile solvents (water, trifluoroacetic acid, acetonitrile) and are free of contaminants (gel contaminants, salts, etc.). Electrotransferred proteins were efficiently retained by the octadecyl matrix (e.g., up to 90% for radioiodinated alpha-lactalbumin), and their recovery on elution from the minicartridge was in the range typical for this type of chromatographic support (e.g., 73% for alpha-lactalbumin). The technique was successfully applied to a variety of proteins in the molecular mass range 6-68 kDa and in amounts between 50 and 2000 pmol. The good mechanical and chemical stability of the minicartridges during electrotransfer and chromatography allowed their repeated use. This new technique permitted a single-step separation of two proteins unresolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, owing to their different elution from the reversed-phase support. The isolated proteins were amenable to analysis by N-terminal sequencing, enzymic digestion, and mass spectrometry of their proteolytic fragments. Chromatographic elution of proteins from the reversed-phase minicartridge was apparently independent of the specific loading mode employed, i.e., loading by conventional loop injection or by electrotransfer. PMID:7498136

  17. Synchronized separation, concentration and determination of trace sulfadiazine and sulfamethazine in food and environment by using polyoxyethylene lauryl ether-salt aqueous two-phase system coupled to high-performance liquid chromatography.

    PubMed

    Lu, Yang; Cong, Biao; Tan, Zhenjiang; Yan, Yongsheng

    2016-11-01

    Polyoxyethylene lauryl ether (POELE10)-Na2C4H4O6 aqueous two-phase extraction (ATPES) is a novel, green pretreatment technique for trace samples. ATPES coupled with high-performance liquid chromatography (HPLC) was used to analyze sulfadiazine (SDZ) and sulfamethazine (SMT) simultaneously in animal by-products (i.e., egg and milk) and environmental water samples. The extraction efficiency (E%) and the enrichment factor (F) of SDZ and SMT were influenced by the type of salt, the concentration of salt, the concentration of POELE10, and the temperature. An orthogonal experimental design (OED) was adopted in the multi-factor experiment to determine the optimized conditions. The final optimal conditions were as follows: a POELE10 concentration of 0.027 g mL(-1), a Na2C4H4O6 concentration of 0.180 g mL(-1), and a temperature of 35 °C. This POELE10-Na2C4H4O6 ATPS was applied to separate and enrich SDZ and SMT in real samples (i.e., water, egg, and milk) under the optimal conditions; the recovery of SDZ and SMT was 96.20-99.52% with RSD of 0.35-3.41%. The limit of detection (LOD) of this method for SDZ and SMT in spiked samples was 2.52-3.64 pg mL(-1), and the limit of quantitation (LOQ) was 8.41-12.15 pg mL(-1). PMID:27434421
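
    For reference, the two figures of merit named above follow from the usual aqueous two-phase definitions: E% is the fraction of analyte recovered in the surfactant-rich phase and F is the concentration gain over the feed. The numbers below are invented solely to show the arithmetic.

        C0, V_total = 10.0, 10.0   # feed concentration (ng/mL) and volume (mL)
        C_top, V_top = 45.0, 2.0   # top-phase concentration (ng/mL) and volume (mL)

        E_percent = 100.0 * (C_top*V_top) / (C0*V_total)   # fraction recovered
        F = C_top / C0                                     # enrichment factor

        print("E%% = %.1f%%, F = %.1f" % (E_percent, F))   # E% = 90.0%, F = 4.5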

  19. Employment of High-Performance Thin-Layer Chromatography for the Quantification of Oleuropein in Olive Leaves and the Selection of a Suitable Solvent System for Its Isolation with Centrifugal Partition Chromatography.

    PubMed

    Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros

    2015-11-01

    A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed in the purification of oleuropein by fast centrifugal partition chromatography, replacing high-performance liquid chromatography at the stage of determining the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficient, K(C) = C(S)/C(M). The total concentrations of the target compound in the stationary phase (C(S)) and in the mobile phase (C(M)) were calculated from the areas measured in the high-performance thin-layer chromatogram. The estimated K(C) was also used for the calculation of the retention volume, V(R), with a chromatographic retention equation. The obtained data were successfully applied to the purification of oleuropein, and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography could be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used for the estimation of its content in each extract with a simple, sensitive, and accurate high-performance thin-layer chromatography method. The best-fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial, and quantification was achieved by UV detection at λ 240 nm. The method was validated, giving rise to an efficient and high-throughput procedure, with the relative standard deviation of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection. The amount of oleuropein for
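
    The "chromatographic retention equation" mentioned above is presumably the standard partition relation, reconstructed here as a hedged sketch (V(M) and V(S), the mobile- and stationary-phase volumes of the CPC column, and the numbers in the example are assumptions, not values from the paper):

        K_C = \frac{C_S}{C_M}, \qquad V_R = V_M + K_C \, V_S

    With concentrations taken proportional to densitometric peak areas, K(C) is approximately A(S)/A(M); for example, a compound with K(C) = 1.2 on a column holding V(M) = 60 mL of mobile phase and V(S) = 140 mL of stationary phase would elute near V(R) = 60 + 1.2 x 140 = 228 mL.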

  20. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idle to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low pressure, air-atomized burner derived from burner designs used commonly in gas turbine combustors. This paper reviews the design and test results of the BNL diesel fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, and such pressure can be easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages and much lower fuel pressure requirements. At very low firing rates, the small passage sizes in pressure swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low input oil burners. The objective of this work is the development of burners that can achieve the benefits of air atomization with air pressures practically achievable with a simple burner fan.

  1. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development, while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have existed for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market, where no volume-production heat pump dryers are available. The issue has typically centered on two key areas: cost and performance. Cost is a given, in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) improved fabric temperature uniformity; and (4) robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  2. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability.

  3. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address the problem of high hardware costs, one may create more inexpensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming the HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) to HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  4. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  5. Implementing High Performance Remote Method Invocation in CCA

    SciTech Connect

    Yin, Jian; Agarwal, Khushbu; Krishnan, Manoj Kumar; Chavarría-Miranda, Daniel; Gorton, Ian; Epperly, Thomas G.

    2011-09-30

    We report our effort in engineering a high performance remote method invocation (RMI) mechanism for the Common Component Architecture (CCA). This mechanism provides a highly efficient and easy-to-use means of distributed computing in CCA, enabling CCA applications to effectively leverage parallel systems to accelerate computations. This work builds on the previous work of Babel RMI. Babel is a high performance language interoperability tool that is used in CCA so that scientific application writers can share, reuse, and compose applications from software components written in different programming languages. Babel provides a transparent and flexible RMI framework for distributed computing. However, the existing Babel RMI implementation is built on top of TCP and does not provide the level of performance required to distribute fine-grained tasks. We observed that the main reason the TCP-based RMI does not perform well is that it does not utilize the high performance interconnect hardware on a cluster efficiently. We have implemented a high performance RMI protocol, HPCRMI. HPCRMI achieves low latency by building on top of a low-level portable communication library, the Aggregate Remote Memory Copy Interface (ARMCI), and minimizing communication for each RMI call. Our design allows an RMI operation to be completed by only two RDMA operations. We also aggressively optimize our system to reduce copying. In this paper, we discuss the design and our experimental evaluation of this protocol. Our experimental results show that our protocol can improve RMI performance by an order of magnitude.
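
    The two-RDMA-operation idea is simple to sketch: the caller deposits serialized arguments directly into a pre-registered request buffer on the callee, and the callee deposits the serialized result into a pre-registered response buffer on the caller, with no intermediate copies or acknowledgements. The toy below mimics that shape with shared memory standing in for RDMA; it is an illustration of the protocol pattern, not the Babel/HPCRMI code, and every name in it is invented.

        import pickle, threading

        class Slot:
            # a pre-registered buffer a peer may write into, plus an arrival
            # flag -- the shared-memory stand-in for an RDMA-writable region
            def __init__(self):
                self.buf = None
                self.ready = threading.Event()
            def rdma_put(self, payload):      # one one-sided write
                self.buf = payload
                self.ready.set()

        request_slot, response_slot = Slot(), Slot()

        def server(methods):
            request_slot.ready.wait()
            name, args = pickle.loads(request_slot.buf)
            result = methods[name](*args)
            response_slot.rdma_put(pickle.dumps(result))  # RDMA op 2: result

        threading.Thread(target=server,
                         args=({"add": lambda a, b: a + b},)).start()

        request_slot.rdma_put(pickle.dumps(("add", (2, 3))))  # RDMA op 1: call
        response_slot.ready.wait()
        print(pickle.loads(response_slot.buf))                # -> 5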

  6. Micro-polarimeter for high performance liquid chromatography

    DOEpatents

    Yeung, Edward E.; Steenhoek, Larry E.; Woodruff, Steven D.; Kuo, Jeng-Chung

    1985-01-01

    A micro-polarimeter interfaced with a high performance liquid chromatography system, for quantitatively analyzing micro and trace amounts of optically active organic molecules, particularly carbohydrates. A flow cell with a narrow bore is connected to a high performance liquid chromatography system. Thin, low-birefringence cell windows cover opposite ends of the bore. A focused and polarized laser beam is directed along the longitudinal axis of the bore as an eluent containing the organic molecules is pumped through the cell. The beam is modulated by air gap Faraday rotators for phase-sensitive detection to enhance the signal-to-noise ratio. An analyzer records the beam's direction of polarization after it passes through the cell. Calibration of the liquid chromatography system allows determination of the quantity of organic molecules present from a determination of the degree to which the polarized beam is rotated when it passes through the eluent.
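
    The calibration step rests on Biot's law: the observed rotation is alpha = [alpha] * l * c, so with the cell path length l fixed and the specific rotation [alpha] of the analyte known, a measured angle converts directly to concentration. A minimal sketch, with illustrative numbers rather than the patent's calibration values:

        SPECIFIC_ROTATION = 66.5   # deg mL g^-1 dm^-1 (sucrose, 589 nm, 20 C)
        PATH_LENGTH_DM = 0.01      # a 1 mm micro flow cell, in decimeters

        def concentration_g_per_ml(rotation_deg):
            # invert Biot's law: c = alpha / ([alpha] * l)
            return rotation_deg / (SPECIFIC_ROTATION * PATH_LENGTH_DM)

        # a 6.65e-4 degree rotation corresponds to about 1 mg/mL of sucrose
        print(concentration_g_per_ml(6.65e-4))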

  7. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  8. Efficacy of a vaporization-resection of the prostate median lobe enlargement and vaporization of the prostate lateral lobe for benign prostatic hyperplasia using a 120-W GreenLight high-performance system laser: the effect on storage symptoms.

    PubMed

    Kim, Kang Sup; Choi, Sae Woong; Bae, Woong Jin; Kim, Su Jin; Cho, Hyuk Jin; Hong, Sung-Hoo; Lee, Ji Youl; Hwang, Tae-Kon; Kim, Sae Woong

    2015-05-01

    GreenLight laser photoselective vaporization of the prostate (PVP) was established as a minimally invasive procedure to treat patients with benign prostatic hyperplasia (BPH). However, it may be difficult to achieve adequate tissue removal from a large prostate, particularly one with an enlarged median lobe. The purpose of this study was to investigate the feasibility and clinical effect of 120-W GreenLight high-performance system laser vaporization-resection of an enlarged prostate median lobe compared with vaporization alone. A total of 126 patients from January 2010 to January 2014 had an enlarged prostate median lobe and were included in this study. Ninety-six patients underwent vaporization only (VP group), and 30 patients underwent vaporization-resection of the enlarged median lobe (VR group). The clinical outcomes were International Prostate Symptom Score (IPSS), quality of life (QOL), maximum flow rate (Q max), and post-void residual urine volume (PVR), assessed at 1, 3, 6, and 12 months postoperatively in the two groups. The parameters were not significantly different preoperatively between the two groups, except for PVR. Operative time and laser time were shorter in the VR group than in the VP group (74.1 vs. 61.9 min and 46.7 vs. 37.8 min; P = 0.020 and 0.013, respectively), and the VR group used less energy (218.2 vs. 171.8 kJ, P = 0.025). Improved IPSS values, increased Q max, and reduced PVR were seen in both groups. In particular, improvements in storage IPSS values were greater at 1 and 3 months in the VR group than in the VP group (P = 0.030 and 0.022, respectively). No significant complications were detected in either group. Median lobe tissue vaporization-resection was complete, and good voiding results were achieved. Although changes in urinary symptoms were similar between patients who received the two techniques, the shorter operating time and lower energy use favored the vaporization-resection technique. In

  9. Cray XMT Brings New Energy to High-Performance Computing

    SciTech Connect

    Chavarría-Miranda, Daniel; Gracio, Deborah K.; Marquez, Andres; Nieplocha, Jaroslaw; Scherrer, Chad; Sofia, Heidi J.

    2008-09-30

    The ability to solve our nation’s most challenging problems—whether it’s cleaning up the environment, finding alternative forms of energy or improving public health and safety—requires new scientific discoveries. High performance experimental and computational technologies from the past decade are helping to accelerate these scientific discoveries, but they introduce challenges of their own. The vastly increasing volumes and complexities of experimental and computational data pose significant challenges to traditional high-performance computing (HPC) platforms as terabytes to petabytes of data must be processed and analyzed. And the growing complexity of computer models that incorporate dynamic multiscale and multiphysics phenomena place enormous demands on high-performance computer architectures. Just as these new challenges are arising, the computer architecture world is experiencing a renaissance of innovation. The continuing march of Moore’s law has provided the opportunity to put more functionality on a chip, enabling the achievement of performance in new ways. Power limitations, however, will severely limit future growth in clock rates. The challenge will be to obtain greater utilization via some form of on-chip parallelism, but the complexities of emerging applications will require significant innovation in high-performance architectures. The Cray XMT, the successor to the Tera/Cray MTA, provides an alternative platform for addressing computations that stymie current HPC systems, holding the potential to substantially accelerate data analysis and predictive analytics for many complex challenges in energy, national security and fundamental science that traditional computing cannot do.

  10. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  11. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  12. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter of six microns was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Performance of this fiber as a reinforcement of composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of possible applications in space structures, the industrial field, sporting goods, and civil infrastructure.

  13. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  14. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    The high performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 °C. The materials have a volatile content of 0.18%, excellent flexibility and adherence properties, and fuel resistance. No corrosion of aluminum or titanium was observed.

  15. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine that iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine is biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Can be reused several times before it becomes necessary to refill with fresh iodine crystals.

  16. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer-aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to designing scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures, as the cost of data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows, and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessment and earthquake early warning systems.

  17. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  18. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  19. High-performance vertical organic transistors.

    PubMed

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with behavior limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, strongly simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. PMID:23637074

  20. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  1. Arteriopathy in the high-performance athlete.

    PubMed

    Takach, Thomas J; Kane, Peter N; Madjarov, Jeko M; Holleman, Jeremiah H; Nussbaum, Tzvi; Robicsek, Francis; Roush, Timothy S

    2006-01-01

    Pain occurs frequently in high-performance athletes and is most often due to musculoskeletal injury or strain. However, athletes who participate in sports that require highly frequent, repetitive limb motion can also experience pain from an underlying arteriopathy, which causes exercise-induced ischemia. We reviewed the clinical records and follow-up care of 3 high-performance athletes (mean age, 29.3 yr; range, 16-47 yr) who were admitted consecutively to our institution from January 2002 through May 2003, each with a diagnosis of limb ischemia due to arteriopathy. The study group comprised 3 males: 2 active in competitive baseball (ages, 16 and 19 yr) and a cyclist (age, 47 yr). Provocative testing and radiologic evaluation established the diagnoses. Treatment goals included targeted resection of compressive structures, arterial reconstruction to eliminate stenosis and possible emboli, and improvement of distal perfusion. Our successful reconstructive techniques included thoracic outlet decompression and interpositional bypass of the subclavian artery in the 16-year-old patient, pectoralis muscle and tendon decompression to relieve compression of the axillary artery in the 19-year-old, and patch angioplasty for endofibrosis affecting the external iliac artery in the 47-year-old. Each patient was asymptomatic on follow-up and had resumed participation in competitive athletics. The recognition and anatomic definition of an arteriopathy that produces exercise-induced ischemia enables the application of precise therapy that can produce a symptom-free outcome and the ability to resume competitive athletics.

  2. Micromachined high-performance RF passives in CMOS substrate

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Ni, Zao; Gu, Lei; Wu, Zhengzheng; Yang, Chen

    2016-11-01

    This review systematically addresses the micromachining technologies used for the fabrication of high-performance radio-frequency (RF) passives that can be integrated into low-cost complementary metal-oxide semiconductor (CMOS)-grade (i.e. low-resistivity) silicon wafers. With the development of various post-CMOS-compatible microelectromechanical systems (MEMS) processes, 3D structural inductors/transformers, variable capacitors, tunable resonators and band-pass/low-pass filters can be compatibly integrated with active integrated circuits to form monolithic RF systems-on-chip. By using MEMS processes, including substrate modifying/suspending and LIGA-like metal electroplating, both the lossy-substrate effect and the resistive loss can be largely suppressed, thereby meeting the high-performance requirements of telecommunication applications.

  3. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  4. Inorganic nanostructured materials for high performance electrochemical supercapacitors.

    PubMed

    Liu, Sheng; Sun, Shouheng; You, Xiao-Zeng

    2014-02-21

    Electrochemical supercapacitors (ES) are a well-known energy storage system with high power density, long cycle life and fast charge-discharge kinetics. Nanostructured materials are a new generation of electrode materials that offer large surface area and short transport/diffusion paths for ions and electrons, enabling high specific capacitance in ES. This mini-review highlights recent developments in inorganic nanostructured materials, including carbon nanomaterials, metal oxide nanoparticles, and metal oxide nanowires/nanotubes, for high-performance ES applications.

  5. Progress Toward Demonstrating a High Performance Optical Tape Recording Technology

    NASA Technical Reports Server (NTRS)

    Oakley, W. S.

    1996-01-01

    This paper discusses the technology developments achieved during the first year of a program to develop a high-performance digital optical tape recording device using a solid-state, diode-pumped, frequency-doubled green laser source. The goal is to demonstrate, within two years, useful read/write data transfer rates of at least 100 megabytes per second and a user capacity of up to one terabyte per cartridge, implemented in a system using a '3480'-style mono-reel tape cartridge.

  6. Stability and control of maneuvering high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Berry, P. W.

    1977-01-01

    The stability and control of a high-performance aircraft were analyzed, and a design methodology for a departure-prevention stability augmentation system (DPSAS) was developed. A general linear aircraft model was derived which includes maneuvering flight effects and trim calculation procedures for investigating highly dynamic trajectories. The stability and control analysis systematically explored the effects of flight condition and angular motion, as well as the stability of typical air combat trajectories. The effects of configuration variation were also examined.

  7. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article discusses inspirational aspects of how to lead a high-performance team. Cogent topics include how to hire staff through methods of "topgrading", with reference to Geoff Smart, and "getting the right people on the bus", referencing Jim Collins' work. Once the staff is hired, the article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture, with suggestions for further reading by Don Miguel Ruiz (The Four Agreements) and John Maxwell (The 21 Irrefutable Laws of Leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader identify what the core element of any superior culture should be. PMID:20127598

  8. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high-performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite-element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). The latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than that of DC brushless motors.

  9. [High-performance society and doping].

    PubMed

    Gallien, C L

    2002-09-01

    Doping is not limited to high-level athletes, nor is it limited to the field of sports activities. The doping phenomenon observed in sports actually reveals an underlying question concerning the notion of sports itself and, more broadly, society's conception of sports. In a high-performance society, which is also a high-risk society, doping behavior is observed in a large number of persons who may or may not participate in sports activities. The motivation is the search for individual success or profit. The fight against doping must therefore focus on individual responsibility and prevention in order to preserve athletes' health and maintain the ethical and educational value of sports activities.

  10. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  11. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
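    As a concrete illustration of the two-grid cycle at the heart of AMG, the following is a minimal serial sketch using dense matrices and a toy aggregation-based interpolation; it is not a parallel implementation, and the hard part the abstract refers to (constructing the interpolation P algebraically) is replaced here by fixed pairwise aggregation:

        import numpy as np

        def two_grid(A, b, x, P, nu=2, omega=2.0/3.0):
            """One two-grid V-cycle: smooth, coarse-grid correct, smooth."""
            Dinv = 1.0 / np.diag(A)
            for _ in range(nu):                      # pre-smoothing (weighted Jacobi)
                x = x + omega * Dinv * (b - A @ x)
            Ac = P.T @ A @ P                         # Galerkin coarse operator
            ec = np.linalg.solve(Ac, P.T @ (b - A @ x))  # coarse-grid correction
            x = x + P @ ec
            for _ in range(nu):                      # post-smoothing
                x = x + omega * Dinv * (b - A @ x)
            return x

        # 1D Poisson test problem; P aggregates pairs of fine points.
        n = 64
        A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        P = np.zeros((n, n//2))
        for j in range(n//2):
            P[2*j, j] = P[2*j+1, j] = 1.0
        b, x = np.ones(n), np.zeros(n)
        for _ in range(20):
            x = two_grid(A, b, x, P)
        print("residual norm:", np.linalg.norm(b - A @ x))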

  12. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  13. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "high-performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Moreover, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
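    A minimal sketch of the event-level parallelism discussed above (a hypothetical process_event and toy event data, not the Geant-V framework); each worker owns whole events, which avoids synchronization but multiplies memory with the worker count, as the abstract notes:

        from multiprocessing import Pool

        def process_event(event):
            # Placeholder physics: a real framework would transport all
            # particles of one event through the detector geometry.
            return sum(hash(str(p)) % 97 for p in event)

        if __name__ == "__main__":
            # Each event is an independent unit of work.
            events = [[("e-", i), ("gamma", i + 1)] for i in range(1000)]
            with Pool(processes=4) as pool:
                results = pool.map(process_event, events)
            print(len(results), "events processed")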

  14. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large-volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the capacity fading or failure that result from stress-induced fracturing of the Si particles and their decoupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of the silicon to the carbon-fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized-bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high-performance anode material. Under this effort, ASI made strides toward establishing a quality-control protocol for the large-volume production of Si-CNFs and identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high-volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  15. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest-performance networks in the world. At SC2000, large-scale and complex local- and wide-area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high-performance computational and communication applications. The testbed was designed to incorporate many interoperable systems and services, and was designed for measurement from the very beginning. The end results were key insights into how to use novel high-performance networking technologies, along with a body of measurements that offers a view into the networks of the future.

  16. Some design considerations for high-performance infrared imaging seeker

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Huang, Jianxiong

    2015-10-01

    In recent years, precision-guided weapons have played an increasingly important role in modern war, and the development and application of infrared imaging guidance technology has received growing attention. As missions and environments become more complex, precision-guided weapons place stricter demands on the infrared imaging seeker. These demands include high detection sensitivity, large dynamic range, better target-recognition capability, better anti-jamming capability and better environmental adaptability. To meet the strict demands of the weapon system, several important issues should be considered in high-performance infrared imaging seeker design. The mission, targets and environment of the infrared imaging guided missile must be taken into account, and trade-offs made among performance goals, design parameters, infrared technology constraints and missile constraints. The optimized application of IRFPAs and automatic target recognition (ATR) in complicated environments should also be addressed. In this paper, these design considerations for high-performance infrared imaging seekers are discussed.

  17. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high-performance computational resources to bear on this task. Our research group built a novel high-performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
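    One of the analyses named above, connected components, reduces to union-find over the subject-object edges of a triple store. A minimal serial sketch on toy triples (nothing like the scale of the Cray XMT system described, but the same notion):

        def find(parent, x):
            while parent[x] != x:              # path halving
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def connected_components(triples):
            # Treat each (subject, predicate, object) triple as an
            # undirected edge between the subject and object nodes.
            parent = {}
            for s, _, o in triples:
                parent.setdefault(s, s)
                parent.setdefault(o, o)
                rs, ro = find(parent, s), find(parent, o)
                if rs != ro:
                    parent[ro] = rs
            comps = {}
            for node in parent:
                comps.setdefault(find(parent, node), []).append(node)
            return list(comps.values())

        triples = [("a", "knows", "b"), ("b", "knows", "c"), ("x", "cites", "y")]
        print(connected_components(triples))   # [['a', 'b', 'c'], ['x', 'y']]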

  18. On implementing MPI-IO portably and with high performance.

    SciTech Connect

    Thakur, R.; Gropp, W.; Lusk, E.

    1998-11-30

    We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, seek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, file preallocation, and some miscellaneous features. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.
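    As a sketch of the portable interface such an implementation exposes, here is a minimal collective-write example using the mpi4py bindings (the file name is illustrative); ROMIO-based MPI libraries service collective calls like this with coordinated optimizations across ranks:

        # Run with: mpiexec -n 4 python write_all.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank contributes one contiguous block; Write_at_all is a
        # collective call, so the library can coordinate and merge the
        # requests from all ranks into efficient file-system accesses.
        data = np.full(1024, rank, dtype=np.int32)
        fh = MPI.File.Open(comm, "blocks.dat",
                           MPI.MODE_WRONLY | MPI.MODE_CREATE)
        fh.Write_at_all(rank * data.nbytes, data)
        fh.Close()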

  19. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance APC system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis verifies that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities, or in determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies into existing designs, or performing facility design and permitting activities.

  20. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High-performance fibers are characterized by their superior properties compared to traditional textile fibers. High-strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high-performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims to achieve maximum protection of high-performance fibers using sheathing approaches. The proposed sheaths are lightweight, to maintain the key advantage of high-performance fibers: their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is to extrude a sheath of low-density polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that an LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, in which the sheathed samples were exposed to UV-VIS radiation in several weatherometer instruments as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  1. Designing High-Performance Schools: A Practical Guide to Organizational Reengineering.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    This book offers a step-by-step, systematic process for designing high-performance learning organizations. The process helps administrators develop proposals for redesigning school districts that are tailored to the district's environment, work system, and social system. Chapter 1 describes the characteristics of high-performing organizations, and…

  2. Trends in high-performance computing for engineering calculations.

    PubMed

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.

  3. A low cost alternative to high performance PCM bit synchronizers

    NASA Technical Reports Server (NTRS)

    Deshong, Bruce

    1993-01-01

    The Code Converter/Clock Regenerator (CCCR) provides a low-cost alternative to high-performance Pulse Code Modulation (PCM) bit synchronizers in environments with a large Signal-to-Noise Ratio (SNR). In many applications, the CCCR can be used in place of PCM bit synchronizers at about one fifth the cost. The CCCR operates at rates from 10 bps to 2.5 Mbps and performs PCM code conversion and clock regeneration. The CCCR has been integrated into a stand-alone system configurable from one to six channels and has also been designed for use in VMEbus compatible systems.

  4. High performance protection circuit for power electronics applications

    SciTech Connect

    Tudoran, Cristian D.; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-23

    In this paper we present a high-performance protection circuit designed for power electronics applications in which load currents can increase rapidly and exceed the maximum allowed values, as in high-frequency induction heating inverters or high-frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can either communicate with the protected system, acting as a "sensor", or interrupt the power supply for protection, in that case functioning as an external, independent protection circuit.

  5. High performance protection circuit for power electronics applications

    NASA Astrophysics Data System (ADS)

    Tudoran, Cristian D.; Dǎdârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan

    2015-12-01

    In this paper we present a high-performance protection circuit designed for power electronics applications in which load currents can increase rapidly and exceed the maximum allowed values, as in high-frequency induction heating inverters or high-frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can either communicate with the protected system, acting as a "sensor", or interrupt the power supply for protection, in that case functioning as an external, independent protection circuit.

  6. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  7. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal, with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. This article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and environmental monitoring. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), which exploit parallel architectures and the GPU, are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.
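    A minimal sketch of such a feature extraction and matching step, using OpenCV's ORB detector and a brute-force Hamming matcher (illustrative file names; this is not the LARES implementation):

        import cv2

        # Detect and match features between two overlapping images.
        img1 = cv2.imread("aerial_1.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("aerial_2.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)        # fast binary descriptor
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Hamming distance with cross-check keeps only mutual best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        print(f"{len(matches)} candidate correspondences")
        # The best correspondences would then feed a robust (e.g. RANSAC)
        # estimate of the aligning geometric transformation.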

  8. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmarks (NPB) absolute performance and performance-per-dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high-performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  9. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid-state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to their low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermopower and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials, including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering and nanostructuring.
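    For reference, the dimensionless figure of merit discussed above follows the standard definition, with S the Seebeck coefficient (thermopower), σ the electrical conductivity, κ the thermal conductivity, and T the absolute temperature:

        ZT = S²σT / κ

    Written this way, the coupling the review describes is explicit: raising σ by doping tends to lower S and raise the electronic contribution to κ, so the power factor S²σ and the thermal conductivity cannot be tuned independently by conventional means.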

  10. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

    We report our investigations on a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double-resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm^3 external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in Ramsey-DR mode, Ramsey signals with higher contrast up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10^-13 τ^-1/2 and 2.4×10^-13 τ^-1/2 are measured for the CW-DR and Ramsey-DR schemes, respectively. In Ramsey-DR operation, thanks to the separation of the light and microwave interactions in time, the light-shift effect is suppressed, which allows improving the long-term clock stability as compared to CW-DR operation. Implementations in miniature atomic clocks are considered.
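    The quoted short-term stabilities follow the white-frequency-noise scaling of the Allan deviation, σ_y(τ) = σ_y(1 s)·τ^(-1/2). As a worked example for the Ramsey-DR figure, and assuming this scaling holds out to the chosen averaging time:

        σ_y(100 s) = 2.4×10^-13 / √100 = 2.4×10^-14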

  11. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.
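    A minimal sketch of the kind of two-dimensional block decomposition described above (illustrative only; the production model connects different decompositions with high-speed transposes and uses MPI rather than a serial loop):

        def block_range(n, nproc, p):
            """Indices [lo, hi) of process p's share of n points, spread evenly."""
            base, extra = divmod(n, nproc)
            lo = p * base + min(p, extra)
            return lo, lo + base + (1 if p < extra else 0)

        # Decompose a 360 x 180 lon-lat grid over a 4 x 2 process grid.
        nx, ny, px, py = 360, 180, 4, 2
        for rank in range(px * py):
            ix, iy = rank % px, rank // px
            xlo, xhi = block_range(nx, px, ix)
            ylo, yhi = block_range(ny, py, iy)
            print(f"rank {rank}: lon [{xlo}:{xhi}), lat [{ylo}:{yhi})")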

  12. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm(3) imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756
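    The field ratio behind the "more than 450 times lower" statement is straightforward arithmetic against a 3 T clinical magnet:

        3 T / 6.5 mT = 3 / 0.0065 ≈ 462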

  13. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  14. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance the fault tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than the other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, the proposed allocation scheme scales well with the number of rails and message sizes.
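    A minimal sketch contrasting two of the policies compared above, static round-robin versus dynamic local-knowledge rail selection (an illustrative toy model, not the paper's code):

        import random

        NUM_RAILS = 4
        queues = [0] * NUM_RAILS   # outstanding bytes per rail (this node's view)
        rr_next = 0

        def round_robin():
            # Static policy: cycle over rails, oblivious to load.
            global rr_next
            rail = rr_next % NUM_RAILS
            rr_next += 1
            return rail

        def local_knowledge():
            # Dynamic policy: least-loaded rail as seen by this node alone;
            # the two end-points of a message can still collide, which the
            # paper's best-performing scheme avoids by reserving both
            # end-points before sending.
            return min(range(NUM_RAILS), key=lambda r: queues[r])

        random.seed(1)
        for policy in (round_robin, local_knowledge):
            for _ in range(4):
                size = random.randint(1, 64) * 1024
                rail = policy()
                queues[rail] += size   # drained when the send completes (omitted)
                print(f"{policy.__name__:15s}: {size:6d} B -> rail {rail}")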

  15. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

    Bulk-heterojunction polymer solar cells that can be fabricated by solution-processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of flexible and cost-effective alternatives to silicon-based electronics. Despite the envisioned advantages and recent technological advances, the performance of polymer solar cells is still inferior to that of their inorganic counterparts in terms of efficiency and stability. Many factors limit the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most decisive for overall performance. In this presentation, I will describe how we approach high performance in polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.

  16. An integrated high performance Fastbus slave interface

    SciTech Connect

    Christiansen, J.; Ljuslin, C.

    1993-08-01

    A high-performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960-1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock-synchronous processor/memory bus. It can work stand-alone or together with a 32-bit microprocessor. The FASIC is a programmable device, enabling its direct use in many different applications. A set of programmable address-mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address-decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy-back sub-card interface, including level conversion between ECL and TTL signal levels, has been implemented using surface-mount components and the 208-pin FASIC chip.

  17. High performance composites with active stiffness control.

    PubMed

    Tridech, Charnwit; Maples, Henry A; Robinson, Paul; Bismarck, Alexander

    2013-09-25

    High performance carbon fiber reinforced composites with controllable stiffness could revolutionize the use of composite materials in structural applications. Here we describe a structural material, which has a stiffness that can be actively controlled on demand. Such a material could have applications in morphing wings or deployable structures. A carbon fiber reinforced-epoxy composite is described that can undergo an 88% reduction in flexural stiffness at elevated temperatures and fully recover when cooled, with no discernible damage or loss in properties. Once the stiffness has been reduced, the required deformations can be achieved at much lower actuation forces. For this proof-of-concept study a thin polyacrylamide (PAAm) layer was electrocoated onto carbon fibers that were then embedded into an epoxy matrix via resin infusion. Heating the PAAm coating above its glass transition temperature caused it to soften and allowed the fibers to slide within the matrix. To produce the stiffness change the carbon fibers were used as resistance heating elements by passing a current through them. When the PAAm coating had softened, the ability of the interphase to transfer load to the fibers was significantly reduced, greatly lowering the flexural stiffness of the composite. By changing the moisture content in PAAm fiber coating, the temperature at which the PAAm softens and the composites undergo a reduction in stiffness can be tuned. PMID:23978266

  18. Fabricating high performance lithium-ion batteries using bionanotechnology.

    PubMed

    Zhang, Xudong; Hou, Yukun; He, Wen; Yang, Guihua; Cui, Jingjie; Liu, Shikun; Song, Xin; Huang, Zhen

    2015-02-28

    Designing, fabricating, and integrating nanomaterials are key to transferring nanoscale science into applicable nanotechnology. Many nanomaterials including amorphous and crystal structures are synthesized via biomineralization in biological systems. Amongst various techniques, bionanotechnology is an effective strategy to manufacture a variety of sophisticated inorganic nanomaterials with precise control over their chemical composition, crystal structure, and shape by means of genetic engineering and natural bioassemblies. This provides opportunities to use renewable natural resources to develop high performance lithium-ion batteries (LIBs). For LIBs, reducing the sizes and dimensions of electrode materials can boost Li(+) ion and electron transfer in nanostructured electrodes. Recently, bionanotechnology has attracted great interest as a novel tool and approach, and a number of renewable biotemplate-based nanomaterials have been fabricated and used in LIBs. In this article, recent advances and mechanism studies in using bionanotechnology for high-performance LIBs are thoroughly reviewed, covering two technical routes: (1) designing and synthesizing composite cathodes, e.g. LiFePO4/C, Li3V2(PO4)3/C and LiMn2O4/C; and (2) designing and synthesizing composite anodes, e.g. NiO/C, Co3O4/C, MnO/C, α-Fe2O3 and nano-Si. This review will hopefully stimulate more extensive and insightful studies on using bionanotechnology for developing high-performance LIBs. PMID:25640923

  19. Fabricating high performance lithium-ion batteries using bionanotechnology

    NASA Astrophysics Data System (ADS)

    Zhang, Xudong; Hou, Yukun; He, Wen; Yang, Guihua; Cui, Jingjie; Liu, Shikun; Song, Xin; Huang, Zhen

    2015-02-01

    Designing, fabricating, and integrating nanomaterials are key to transferring nanoscale science into applicable nanotechnology. Many nanomaterials including amorphous and crystal structures are synthesized via biomineralization in biological systems. Amongst various techniques, bionanotechnology is an effective strategy to manufacture a variety of sophisticated inorganic nanomaterials with precise control over their chemical composition, crystal structure, and shape by means of genetic engineering and natural bioassemblies. This provides opportunities to use renewable natural resources to develop high performance lithium-ion batteries (LIBs). For LIBs, reducing the sizes and dimensions of electrode materials can boost Li+ ion and electron transfer in nanostructured electrodes. Recently, bionanotechnology has attracted great interest as a novel tool and approach, and a number of renewable biotemplate-based nanomaterials have been fabricated and used in LIBs. In this article, recent advances and mechanism studies in using bionanotechnology for high-performance LIBs are thoroughly reviewed, covering two technical routes: (1) designing and synthesizing composite cathodes, e.g. LiFePO4/C, Li3V2(PO4)3/C and LiMn2O4/C; and (2) designing and synthesizing composite anodes, e.g. NiO/C, Co3O4/C, MnO/C, α-Fe2O3 and nano-Si. This review will hopefully stimulate more extensive and insightful studies on using bionanotechnology for developing high-performance LIBs.

  1. High Performance Walls in Hot-Dry Climates

    SciTech Connect

    Hoeschele, Marc; Springer, David; Dakin, Bill; German, Alea

    2015-01-01

    High-performance walls represent a high-priority measure for moving the next generation of new homes to the Zero Net Energy performance level. The primary goals in improving wall thermal performance are increasing the wall framing from 2x4 to 2x6, adding more cavity and exterior rigid insulation, and achieving insulation installation that meets ENERGY STAR's thermal bypass checklist. To support this activity, in 2013 the Pacific Gas & Electric Company initiated a project with Davis Energy Group (lead for the Building America team, Alliance for Residential Building Innovation) to solicit builder involvement in California in field demonstrations of high-performance wall systems. Builders were given incentives and design support in exchange for providing site access for construction observation, cost information, and builder survey feedback. Information from the project was designed to feed into the 2016 Title 24 process, but also to serve as an initial mechanism for engaging builders in more high-performance construction strategies. This Building America project utilized information collected in the California project.

  2. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy-efficiency research and deployment for high-performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. The roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts; it also addresses delivery mechanisms to get the research products into the market. Because of this importance to the California economy, it is appropriate for California to take the lead in assessing the energy-efficiency research needs, opportunities, and priorities for this market. In addition, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996; Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are major contributors to peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations (primarily safety driven) that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy-efficiency research. Many industries and institutions utilize laboratories and cleanrooms; industries operating cleanrooms in California include semiconductor manufacturing, semiconductor suppliers, pharmaceuticals, biotechnology, disk-drive manufacturing, flat-panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  3. Normal-phase high-performance liquid chromatography of triacylglycerols.

    PubMed

    Rhodes, S H; Netting, A G

    1988-08-31

    Triacylglycerols have been separated by normal-phase high-performance liquid chromatography (HPLC) on silica utilising a solvent system consisting of dry acetonitrile and half-water-saturated hexane (0.7:99.3). This solvent system is UV transparent, allowing detection at 200 nm, and affords a separation in which retention is primarily dependent on the number of constituent double bonds. There is also a slight separation by chain length, the longer chain lengths being eluted first. The system is therefore complementary to currently used reversed-phase HPLC systems. Chromatograms for some polyunsaturated fats and oils are given, and the most polyunsaturated triacylglycerols from linseed oil are analysed in more detail. Data are given for the separation and quantitation of the pentafluorobenzyl esters of the constituent fatty acids from these triacylglycerols by a similar normal-phase HPLC system.

  4. High-performance computing in structural mechanics and engineering

    SciTech Connect

    Adeli, H.; Kamat, M.P.; Kulkarni, G.; Vanluchene, R.D. (Georgia Inst. of Technology, Atlanta; Montana State Univ., Bozeman)

    1993-07-01

    Recent advances in computer hardware and software have made multiprocessing a viable and attractive technology. This paper reviews high-performance computing methods in structural mechanics and engineering through the use of a new generation of multiprocessor computers. The paper presents an overview of vector pipelining, performance metrics for parallel and vector computers, programming languages, and general programming considerations. Recent developments in the application of concurrent processing techniques to the solution of structural mechanics and engineering problems are reviewed, with special emphasis on linear structural analysis, nonlinear structural analysis, transient structural analysis, dynamics of multibody flexible systems, and structural optimization. 64 refs.
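
    The performance metrics mentioned for parallel and vector computers reduce to a few standard formulas. As a minimal illustration (the serial fraction and processor counts below are hypothetical examples, not figures from the paper), Amdahl's-law speedup and parallel efficiency can be computed as follows:

    ```python
    # Illustrative parallel performance metrics (Amdahl's law); the
    # serial fraction and processor counts are hypothetical, not
    # values taken from the reviewed paper.

    def amdahl_speedup(serial_fraction: float, processors: int) -> float:
        """Speedup S(p) = 1 / (f + (1 - f)/p) for serial fraction f."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    for p in (2, 8, 64, 1024):
        s = amdahl_speedup(0.05, p)          # assume 5% inherently serial work
        print(f"p={p:5d}  speedup={s:7.2f}  efficiency={s / p:6.1%}")
    ```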

  5. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a ''plug and play'' environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel ''cohort.'' We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.
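
    The provides/uses port pattern underlying the CCA can be sketched schematically. The Python sketch below is a hypothetical analogue of that pattern; the class and method names are invented for illustration and are not the actual CCA (or SIDL) interfaces. Note how the connection is a direct in-process reference, which is what keeps inter-component calls cheap:

    ```python
    # Schematic provides/uses port pattern in the spirit of the CCA;
    # names here are hypothetical, not the CCA API.

    class SolverPort:
        """An interface ('port') a component may provide or use."""
        def solve(self, rhs):
            raise NotImplementedError

    class JacobiSolver(SolverPort):
        """Component providing SolverPort."""
        def solve(self, rhs):
            return [0.5 * x for x in rhs]    # stand-in computation

    class Driver:
        """Component that uses a SolverPort without knowing the provider."""
        def __init__(self):
            self.solver = None               # filled in when ports are connected
        def run(self):
            print(self.solver.solve([1.0, 2.0, 3.0]))

    framework = {"solver": JacobiSolver()}   # toy 'framework' registry
    driver = Driver()
    driver.solver = framework["solver"]      # connect uses-port to provides-port
    driver.run()
    ```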

  6. Fundamentals of Modeling, Data Assimilation, and High-performance Computing

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    2005-01-01

    This lecture will introduce the concepts of modeling, data assimilation, and high-performance computing as they relate to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about the development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.

  7. Simulated space environmental effects on some experimental high performance polymers

    NASA Technical Reports Server (NTRS)

    Connell, John W.

    1993-01-01

    High performance polymers for potential space applications were evaluated under simulated space environmental conditions. Experimental resins from blends of acetylene-terminated materials, poly(arylene ether)s, and low-color polyimides were exposed to high-energy electron and ultraviolet radiation in an attempt to simulate space environmental effects. Thin films, neat resin moldings, and carbon fiber reinforced composites were exposed, and the effects on certain polymer properties were determined. Recent research involving the effects of various radiation exposures on the physical, optical, and mechanical properties of several experimental polymer systems is reviewed.

  8. High-Performance Beam Simulator for the LANSCE Linac

    SciTech Connect

    Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.

    2012-05-14

    A high-performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes to the real-time linac parameters are tracked and the simulation results are updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits, and performance are presented.
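
    The core of such a simulator is a transfer-map push applied to many particles at once, which is the structure that maps well onto GPU hardware. The NumPy sketch below illustrates that data-parallel shape with a simple linear drift map; the parameters and the map itself are illustrative, not PARMILA's physics:

    ```python
    # Data-parallel particle push through drift sections; a schematic of
    # the per-particle update a GPU tracker parallelizes. The linear map
    # and all parameters are hypothetical, not PARMILA's algorithms.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x  = rng.normal(0.0, 1e-3, n)    # transverse position [m]
    xp = rng.normal(0.0, 1e-4, n)    # transverse angle [rad]

    def drift(x, xp, length):
        """Linear drift map: x -> x + L*x', x' unchanged (one thread per
        particle on a GPU; here a vectorized NumPy update)."""
        return x + length * xp, xp

    for _ in range(200):             # 200 hypothetical drift segments
        x, xp = drift(x, xp, 0.05)

    print(f"rms beam size after the line: {x.std():.3e} m")
    ```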

  9. Strategy Guideline: Advanced Construction Documentation Recommendations for High Performance Homes

    SciTech Connect

    Lukachko, A.; Gates, C.; Straube, J.

    2011-12-01

    As whole house energy efficiency increases, new houses become less like conventional houses that were built in the past. New materials and new systems require greater coordination and communication between industry stakeholders. The Guideline for Construction Documents for High Performance Housing provides advice to address this need. The reader will be presented with four changes that are recommended to achieve improvements in energy efficiency, durability and health in Building America houses: create coordination drawings, improve specifications, improve detail drawings, and review drawings and prepare a Quality Control Plan.

  10. Highlighting High Performance: Adam Joseph Lewis Center for Environmental Studies, Oberlin College, Oberlin, Ohio

    SciTech Connect

    2002-11-01

    Oberlin College’s Adam Joseph Lewis Center for Environmental Studies is a high-performance building featuring an expansive photovoltaic system and a closed-loop groundwater heat pump system. Designers incorporated energy-efficient components and materials

  11. Designing High Performance Schools through Instructional Supervision.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    This paper summarizes a new paradigm of instructional supervision, which shifts the focus from individual behavior to the improvement of work processes and social system components of the school district. The proposed paradigm, the Knowledge Work Supervision model, is derived from sociotechnical systems design theory and linked to the premise that…

  12. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  13. High Performance Databases For Scientific Applications

    NASA Technical Reports Server (NTRS)

    French, James C.; Grimshaw, Andrew S.

    1997-01-01

    The goal of this task is to develop an Extensible File System (ELFS). ELFS addresses the following problems: (1) providing high-bandwidth performance across architectures; (2) reducing the cognitive burden faced by application programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.
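
    The "hierarchy of file classes" idea lends itself to a short object-oriented sketch: a base class fixes a uniform access interface, while subclasses encapsulate format- and layout-specific details. The class and method names below are invented for illustration and are not the actual ELFS interface:

    ```python
    # Hypothetical sketch of a file-class hierarchy in the spirit of
    # ELFS; names and methods are invented, not the ELFS API.
    import struct

    class ScientificFile:
        """Base class: a uniform record interface hiding on-disk format."""
        def __init__(self, path):
            self.path = path
        def read_record(self, index):
            raise NotImplementedError

    class BinaryMatrixFile(ScientificFile):
        """Subclass specializing the interface for one on-disk layout:
        fixed-width rows of 8-byte floats."""
        def __init__(self, path, ncols):
            super().__init__(path)
            self.ncols = ncols
        def read_record(self, index):
            row_bytes = 8 * self.ncols
            with open(self.path, "rb") as f:
                f.seek(index * row_bytes)        # format-specific seek
                return struct.unpack(f"{self.ncols}d", f.read(row_bytes))
    ```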

  14. High performance computing: Clusters, constellations, MPPs, and future directions

    SciTech Connect

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high-end computing systems, in particular the authors' use of "cluster" to refer to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  15. Mass storage: The key to success in high performance computing

    NASA Technical Reports Server (NTRS)

    Lee, Richard R.

    1993-01-01

    There are numerous High Performance Computing & Communications initiatives in the world today. All are determined to help solve some "Grand Challenge"-type problem, but each appears to be dominated by the pursuit of higher and higher levels of CPU performance and interconnection bandwidth as the approach to success, without any regard to the impact of Mass Storage. My colleagues and I at Data Storage Technologies believe that all will ultimately have their performance against their goals measured by their ability to efficiently store and retrieve the "deluge of data" created by end-users who will be using these systems to solve scientific Grand Challenge problems, and that the issue of Mass Storage will then become the determinant of success or failure in achieving each project's goals. In today's world of High Performance Computing and Communications (HPCC), the critical path to success in solving problems can only be traveled by designing and implementing Mass Storage Systems capable of storing and manipulating the truly "massive" amounts of data associated with solving these challenges. Within my presentation I will explore this critical issue and hypothesize solutions to this problem.

  16. High-Performance Ducts in Hot-Dry Climates

    SciTech Connect

    Hoeschele, Marc; Chitwood, Rick; German, Alea; Weitzel, Elizabeth

    2015-07-30

    Duct thermal losses and air leakage have long been recognized as prime culprits in the degradation of heating, ventilating, and air-conditioning (HVAC) system efficiency. Both the U.S. Department of Energy’s Zero Energy Ready Home program and California’s proposed 2016 Title 24 Residential Energy Efficiency Standards require that ducts be installed within conditioned space or that other measures be taken to provide similar improvements in delivery effectiveness (DE). Pacific Gas & Electric Company commissioned a study to evaluate ducts in conditioned space and high-performance attics (HPAs) in support of the proposed codes and standards enhancements included in California’s 2016 Title 24 Residential Energy Efficiency Standards. The goal was to work with a select group of builders to design and install high-performance duct (HPD) systems, such as ducts in conditioned space (DCS), in one or more of their homes and to obtain test data to verify the improvement in DE compared to standard practice. Davis Energy Group (DEG) helped select the builders and led a team that provided information about HPD strategies to them. DEG also observed the construction process, completed testing, and collected cost data.

  17. Understanding and Improving High-Performance I/O Subsystems

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

    This research program has been conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to the many important research findings for NASA and the resulting publications, the program has helped orient the doctoral research of two students towards parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful to MasPar, with whose technical management the P.I. has had many interactions. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in three segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This third study was conducted on the Intel Paragon and also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned three studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.
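
    The latency-hiding theme of the third study can be sketched as a simple prefetch loop: issue the read for chunk i+1 before processing chunk i, so I/O and computation overlap. The workload below is hypothetical; only the overlap pattern is the point:

    ```python
    # Double-buffered prefetch: overlap the read of the next chunk with
    # computation on the current one. The chunked workload is a
    # hypothetical stand-in for the sparse-matrix I/O the study treats.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def read_chunk(i):
        time.sleep(0.01)                 # stand-in for a slow I/O request
        return [i] * 1000

    def process(chunk):
        return sum(chunk)

    total = 0
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(read_chunk, 0)
        for i in range(1, 10):
            chunk = pending.result()     # block only if I/O is still behind
            pending = pool.submit(read_chunk, i)   # prefetch next chunk
            total += process(chunk)      # compute while I/O proceeds
        total += process(pending.result())
    print(total)
    ```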

  18. Maintaining safety and high performance on shiftwork

    NASA Technical Reports Server (NTRS)

    Monk, T. H.; Folkard, S.; Wedderburn, A. I.

    1996-01-01

    This review of the shiftwork area focuses on aspects of safety and productivity. It discusses the situations in which shiftworker performance is critical, the types of problem that can develop, and the reasons why shiftworker performance can be impaired. The review ends with a discussion of the various advantages and disadvantages of several shift rotation systems, and of other possible solutions to the problem.

  19. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  20. Highlighting High Performance: Four Times Square

    SciTech Connect

    Not Available

    2001-11-01

    4 Times Square is a 48-story environmentally responsible building in New York City. Developed by the Durst Organization, the building is the first project of its size to adopt standards for energy efficiency, indoor ecology, sustainable materials, and responsible construction, operations, and maintenance procedures. Designers used a whole-building approach--considering how the building's systems can work together most efficiently--and educated tenants on the benefits of the design.

  1. A History of High-Performance Computing

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Faster than most speedy computers. More powerful than its NASA data-processing predecessors. Able to leap large, mission-related computational problems in a single bound. Clearly, it's neither a bird nor a plane, nor does it need to don a red cape, because it's super in its own way. It's Columbia, NASA's newest supercomputer and one of the world's most powerful production/processing units. Named Columbia to honor the STS-107 Space Shuttle Columbia crewmembers, the new supercomputer is making it possible for NASA to achieve breakthroughs in science and engineering, fulfilling the Agency's missions, and, ultimately, the Vision for Space Exploration. Shortly after being built in 2004, Columbia achieved a benchmark rating of 51.9 teraflop/s on 10,240 processors, making it the world's fastest operational computer at the time of completion. Putting this speed into perspective, 20 years ago, the most powerful computer at NASA's Ames Research Center, home of the NASA Advanced Supercomputing Division (NAS), ran at a speed of about 1 gigaflop (one billion calculations per second). The Columbia supercomputer is 50,000 times faster than this computer and offers a tenfold increase in capacity over the prior system housed at Ames. What's more, Columbia is considered the world's largest Linux-based, shared-memory system. The system is offering immeasurable benefits to society and is the zenith of years of NASA/private industry collaboration that has spawned new generations of commercial, high-speed computing systems.

  2. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Quarterly progress report 8, October--December 1993

    SciTech Connect

    Not Available

    1994-02-01

    A concept for an advanced coal-fired combined-cycle power generating system is currently being developed. The first phase of this three-phase program consists of conducting the necessary research and development to define the system, evaluating the economic and technical feasibility of the concept, and preparing an R&D plan to develop the concept further. The power generating system being developed in this project will be an improvement over current coal-fired systems. Goals have been specified that relate to the efficiency, emissions, costs, and general operation of the system. The system proposed to meet these goals is a combined-cycle system where air for a gas turbine is indirectly heated to approximately 1800°F in furnaces fired with coal-derived fuels and then directly heated in a natural-gas-fired combustor to about 2400°F. The system is based on a pyrolyzing process that converts the coal into a low-Btu fuel gas and char. The fuel gas is relatively clean, and it is fired to heat tube surfaces that are susceptible to corrosion and problems from ash deposition. In particular, the high-temperature air heater tubes, which will need to be a ceramic material, will be located in a separate furnace or region of a furnace that is exposed to combustion products from the low-Btu fuel gas only.

  3. High-performance long wavelength superlattice infrared detectors

    NASA Astrophysics Data System (ADS)

    Soibel, Alexander; Ting, David Z.-Y.; Hill, Cory J.; Lee, Mike; Nguyen, Jean; Keo, Sam A.; Mumolo, Jason M.; Gunapala, Sarath D.

    2011-01-01

    The nearly lattice-matched InAs/GaSb/AlSb (antimonide) material system offers tremendous flexibility in realizing high-performance infrared detectors. Antimonide-based superlattice (SL) detectors can be tailor-made to have cutoff wavelengths ranging from the short wave infrared (SWIR) to the very long wave infrared (VLWIR). SL detectors are predicted to have suppressed Auger recombination rates and low interband tunneling, resulting in suppressed dark currents. Moreover, the nearly lattice-matched antimonide material system, consisting of InAs, GaSb, AlSb, and their alloys, allows for the construction of superlattice heterostructures. In particular, unipolar barriers, which block one carrier type without impeding the flow of the other, have been implemented in the design of SL photodetectors to realize complex heterodiodes with improved performance. Here, we report our recent efforts in achieving state-of-the-art performance in antimonide-superlattice-based infrared photodetectors.

  4. Power/energy use cases for high performance computing.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but to make the best use of the solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  5. High Performance Computing - Power Application Programming Interface Specification.

    SciTech Connect

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
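
    A portable measurement-and-control interface of the kind proposed might expose power counters and caps behind a small abstract surface. The sketch below is purely hypothetical; the names are invented for illustration and do not reproduce the actual Power API specification:

    ```python
    # Purely hypothetical sketch of a portable power interface; the
    # names below are invented and are NOT the specification's API.
    from abc import ABC, abstractmethod

    class PowerObject(ABC):
        """An element of the system hierarchy (node, socket, memory...)."""
        @abstractmethod
        def read_power_watts(self) -> float: ...
        @abstractmethod
        def set_power_cap_watts(self, cap: float) -> None: ...

    class Node(PowerObject):
        """One node, backed by some vendor-specific counter and control."""
        def __init__(self, name):
            self.name, self._cap = name, None
        def read_power_watts(self):
            return 275.0                     # stand-in for a vendor counter
        def set_power_cap_watts(self, cap):
            self._cap = cap                  # stand-in for a vendor control

    node = Node("node0042")
    if node.read_power_watts() > 250.0:      # toy facility-level policy
        node.set_power_cap_watts(250.0)
    ```

    The point of such an abstraction is that facility tools, schedulers, and runtimes all program against the same surface, while each vendor supplies the concrete subclass.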

  6. Characterization of the crosslinking reaction in high performance phenolic resins

    NASA Astrophysics Data System (ADS)

    Patel, Jigneshkumar; Zou, Guo Xiang; Hsu, Shaw Ling (University of Massachusetts, Polymer Science and Engineering)

    In this study, a combination of thermal analysis, infrared spectroscopy (near and mid), and low-field NMR was used to characterize the crosslinking reaction involving phenol formaldehyde resin and a crosslinking agent, hexamethylenetetramine (HMTA). The strong hydrogen bonds in the resin and the completely crystalline HMTA (Tm = 280 °C) severely hamper the crosslinking process. Yet the addition of a small amount of plasticizer can induce a highly efficient crosslinking reaction to achieve the desired mechanical properties needed in a number of high performance organic-inorganic composites. Infrared spectroscopy clarifies the dissolution process of the crystalline crosslinker and the specific interactions needed to achieve miscibility of the reactants. Thermal analysis enabled us to follow the changing mobility of the system as a function of temperature. Low-field NMR with the T1 inversion-recovery technique allowed us to monitor the crosslinking process directly. For the first time, it is now possible to identify the functionality of the plasticizer and correlate the crosslinked structure achieved to the macroscopic performance needed for high performance organic-inorganic composites.

  7. High-performance software MPEG video player for PCs

    NASA Astrophysics Data System (ADS)

    Eckart, Stefan

    1995-04-01

    This presentation describes the implementation of the video part of a high performance software MPEG player for PCs, capable of decoding both video and audio in real-time on a 90 MHz Pentium system. The basic program design concepts, the methods to achieve high performance, the quality versus speed trade-offs employed by the program, and performance figures, showing the contribution of the different decoding steps to the total computational effort, are presented. Several decoding stages work on up to four data words in parallel by splitting the 32 bit ALU into four virtual 8 bit ALUs. Care had to be taken to avoid arithmetic overflow in these stages. The 8 X 8 inverse DCT is based on a table driven symmetric forward-mapped algorithm which splits the IDCT into four 4 X 4 DCTs. In addition, the IDCT has been combined with the inverse quantization into a single computational step. The display process uses a fast 4 X 4 ordered dither algorithm in YUV space to quantize the 24 bit 4:2:0 YUV output of the decoder to the 8 bit color lookup table hardware of the PC.
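
    The "four virtual 8-bit ALUs" technique is a classic SIMD-within-a-register trick: mask off the top bit of each byte so carries cannot cross byte boundaries, add, then restore the top bits with XOR. The sketch below illustrates the general technique, not the player's exact routines:

    ```python
    # SIMD-within-a-register: add four packed 8-bit values held in one
    # 32-bit word without letting carries spill between bytes. This is
    # the general trick, not the MPEG player's actual code.
    LOW7  = 0x7F7F7F7F   # low 7 bits of every byte
    HIGH1 = 0x80808080   # top bit of every byte

    def packed_add8(a: int, b: int) -> int:
        """Per-byte addition modulo 256 on four bytes at once."""
        low = (a & LOW7) + (b & LOW7)        # carries stop at bit 7
        return (low ^ ((a ^ b) & HIGH1)) & 0xFFFFFFFF

    a = 0x10FF2030
    b = 0x01017070
    print(hex(packed_add8(a, b)))            # 0x110090a0: per-byte sums
    ```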

  8. High-performance CFL downlights: The best and the brightest

    SciTech Connect

    Sardinsky, R.; Hawthorne, S.; Newcomb, J.

    1993-12-31

    Downlight fixtures -- often referred to as "recessed cans" -- are among the most common lighting fixtures in commercial and residential settings. As such, they represent one of the most promising targets for improving lighting energy efficiency. The authors estimate that downlight fixtures account for more than one-fifth of the 2.8 billion incandescent lighting sockets in the US, and represent about 8 percent of total direct lighting energy use. Over 30 million new fixtures of this type are sold each year in the US. With existing and foreseeable technology, nearly two-thirds of the incandescent downlights in the US are candidates for retrofit or replacement with compact fluorescent lamps (CFLs) or fixtures. The remaining one-third, however, are unlikely to ever be replaceable with CFL technology because of constraints on light output, lighting quality, size, and cost-effectiveness of CFL alternatives. High performance downlight systems using compact fluorescent lamps and incorporating advanced optical, thermal, and ballast designs use up to 75 percent less energy than conventional incandescent downlight fixtures. Many CFL downlight fixtures, however, perform poorly. In this report, the authors explore ways in which various elements of fixture design influence performance. They also describe exemplary elements of high-performance designs, and evaluate several emerging or experimental technologies that promise to further improve efficiency.

  9. Flexible body dynamic stability for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Goforth, E. A.; Youssef, H. M.; Apelian, C. V.; Schroeder, S. C.

    1991-01-01

    Dynamic equations which include the effects of unsteady aerodynamic forces and a flexible body structure were developed for a free flying high performance fighter aircraft. The linear and angular deformations are assumed to be small in the body reference frame, allowing the equations to be linearized in the deformation variables. Equations for total body dynamics and flexible body dynamics are formulated using the hybrid coordinate method and integrated in a state space format. A detailed finite element model of a generic high performance fighter aircraft is used to generate the mass and stiffness matrices. Unsteady aerodynamics are represented by a rational function approximation of the doublet lattice matrices. The equations simplify for the case of constant angular rate of the body reference frame, allowing the effect of roll rate to be studied by computing the eigenvalues of the system. It is found that the rigid body modes of the aircraft are greatly affected by introducing a constant roll rate, while the effect on the flexible modes is minimal for this configuration.
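
    Studying stability by computing eigenvalues of the linearized state-space system as roll rate varies can be illustrated on a toy model. The two-mode system below, with a gyroscopic coupling term scaled by roll rate, is hypothetical and stands in for the paper's much larger finite element model:

    ```python
    # Toy illustration of roll-rate-dependent stability analysis: build
    # a state-space matrix with gyroscopic coupling scaled by roll rate
    # p, then inspect its eigenvalues. The model is hypothetical.
    import numpy as np

    def state_matrix(p, w1=2.0, w2=3.0, zeta=0.02):
        """State x = [q1, q2, q1dot, q2dot]; p couples the two modes."""
        K = np.diag([w1**2, w2**2])          # modal stiffness
        C = 2 * zeta * np.diag([w1, w2])     # modal damping
        G = p * np.array([[0.0, 1.0], [-1.0, 0.0]])   # gyroscopic term
        Z, I = np.zeros((2, 2)), np.eye(2)
        return np.block([[Z, I], [-K, -(C + G)]])

    for p in (0.0, 1.0, 5.0):
        eigs = np.linalg.eigvals(state_matrix(p))
        print(f"p={p:3.1f}  max Re(lambda) = {eigs.real.max():+.4f}")
    ```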

  10. Developing Flexible, High Performance Polymers with Self-Healing Capabilities

    NASA Technical Reports Server (NTRS)

    Jolley, Scott T.; Williams, Martha K.; Gibson, Tracy L.; Caraccio, Anne J.

    2011-01-01

    Flexible, high performance polymers such as polyimides are often employed in aerospace applications. They typically find uses in areas where improved physical characteristics such as fire resistance, long-term thermal stability, and solvent resistance are required. It is anticipated that such polymers could find uses in future long duration exploration missions as well. Their use would be even more advantageous if self-healing capability or mechanisms could be incorporated into these polymers. Such innovative approaches are currently being studied at the NASA Kennedy Space Center for use in high performance wiring systems or inflatable and habitation structures. Self-healing or self-sealing capability would significantly reduce maintenance requirements, and increase the safety and reliability performance of the systems into which these polymers would be incorporated. Many unique challenges need to be overcome in order to incorporate a self-healing mechanism into flexible, high performance polymers. Significant research into the incorporation of a self-healing mechanism into structural composites has been carried out over the past decade by a number of groups, notable among them being the University of Illinois [1]. Various mechanisms for the introduction of self-healing have been investigated. Examples of these are: 1) microcapsule-based healant delivery; 2) vascular network delivery; and 3) damage-induced triggering of latent substrate properties. Successful self-healing has been demonstrated in structural epoxy systems, with almost complete reestablishment of composite strength being achieved through the use of microencapsulation technology. However, the incorporation of a self-healing mechanism into a system in which the material is flexible, or a thin film, is much more challenging. In the case of using microencapsulation, healant core content must be small enough to reside in films less than 0.1 millimeters thick, and must overcome significant capillary and surface

  11. High-performance commercial building facades

    SciTech Connect

    Lee, Eleanor; Selkowitz, Stephen; Bazjanac, Vladimir; Inkarojrit, Vorapat; Kohler, Christian

    2002-06-01

    This study focuses on advanced building facades that use daylighting, sun control, ventilation systems, and dynamic systems. A quick perusal of the leading architectural magazines, or a discussion in most architectural firms today, will eventually lead to mention of some of the innovative new buildings that are being constructed with all-glass facades. Most of these buildings are appearing in Europe, although interestingly U.S. A/E firms often have a leading role in their design. This "emerging technology" of heavily glazed facades is often associated with buildings whose design goals include energy efficiency, sustainability, and a "green" image. While there are a number of new books on the subject with impressive photos and drawings, there is little critical examination of the actual performance of such buildings, and a generally poor understanding as to whether they achieve their performance goals, or even what those goals might be. Even if the building "works", it is often dangerous to take a design solution from one climate and location and transport it to a new one without a good causal understanding of how the systems work. In addition, there is a wide range of existing and emerging glazing and fenestration technologies in use in these buildings, many of which break new ground with respect to innovative structural use of glass. It is unclear how well many of these designs would work as currently formulated in California locations dominated by intense sunlight and seismic events. Finally, the costs of these systems are higher than normal facades, but claims of energy and productivity savings are used to justify some of them. Once again these claims, while plausible, are largely unsupported. There have been major advances in glazing and facade technology over the past 30 years and we expect to see continued innovation and product development. It is critical in this process to be able to understand which performance goals are being met by current

  12. Designing High Performance Schools (CD-ROM)

    SciTech Connect

    Not Available

    2002-10-01

    The EnergySmart Schools Design Guidelines and Best Practices Manual were written as a part of the EnergySmart Schools suite of documents, provided by the US Department of Energy, to educate school districts around the country about energy efficiency and renewable energy. Written for school administrators, design teams, and architects and engineers, the documents are designed to help those who are responsible for designing or retrofitting schools, as well as their project managers. This manual will help design staff make informed decisions about energy and environmental issues important to the school systems and communities.

  13. High-performance planar nanoscale dielectric capacitors

    NASA Astrophysics Data System (ADS)

    Özçelik, V. Ongun; Ciraci, S.

    2015-05-01

    We propose a model for planar nanoscale dielectric capacitors consisting of a single-layer, insulating hexagonal boron nitride (BN) stripe placed between two metallic graphene stripes, all forming commensurately a single atomic plane. First-principles density functional calculations on these nanoscale capacitors for different levels of charging and different widths of the graphene-BN stripes indicate high gravimetric capacitance values, which are comparable to those of supercapacitors made from other carbon-based materials. The present nanocapacitor models allow the fabrication of series, parallel, and mixed combinations, which offer potential applications in two-dimensional flexible nanoelectronics, energy storage, and heat-pressure sensing systems.

  14. High Performance Piezoelectric Actuated Gimbal (HIERAX)

    SciTech Connect

    Charles Tschaggeny; Warren Jones; Eberhard Bamberg

    2007-04-01

    This paper presents a 3-axis gimbal whose three rotational axes are actuated by a novel drive system: linear piezoelectric motors whose linear output is converted to rotation by using drive disks. Advantages of this technology are: fast response, high accelerations, dither-free actuation and backlash-free positioning. The gimbal was developed to house a laser range finder for the purpose of tracking and guiding unmanned aerial vehicles during landing maneuvers. The tilt axis was built and the test results indicate excellent performance that meets design specifications.

  15. SIMENGINE: a low-cost, high-performance platform for embedded biophysical simulations.

    PubMed

    Weinstein, Randall K; Church, Christopher T; Lebsack, Carl S; Cook, Joshua E; Sorensen, Michael E

    2009-01-01

    Numerical simulations of dynamical systems are an obvious application of high-performance computing. Unfortunately, this application is underutilized because many modelers lack the technical expertise and financial resources to leverage high-performance computing hardware. Additionally, few platforms exist that can enable high-performance computing with real-time guarantees for inclusion into embedded systems--a prerequisite for working with medical devices. Here we introduce simEngine, a platform for numerical simulations of dynamical systems that reduces modelers' programming effort, delivers simulation speeds 10-100 times faster than a conventional microprocessor, and targets high-performance hardware suitable for real-time and embedded applications. This platform consists of a high-level mathematical language used to describe the simulation, a compiler/resource scheduler that generates the high-performance implementation of the simulation, and the high-performance hardware target. In this paper we present an overview of the platform, including a network-attached embedded computing device utilizing field-programmable gate arrays (FPGAs) suitable for real-time, high-performance computing. We go on to describe an example model implementation to demonstrate the platform's performance and describe how future development will improve system performance.
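
    The class of biophysical models such a platform compiles can be stated in a few lines of conventional code. The fixed-step FitzHugh-Nagumo neuron simulation below is a generic illustration of that model class; it is not simEngine's modeling language or its generated FPGA implementation:

    ```python
    # Generic fixed-step simulation of the FitzHugh-Nagumo neuron model,
    # illustrating the kind of biophysical dynamical system such a
    # platform targets; not simEngine's language or output.
    def simulate(i_ext=0.5, dt=0.01, steps=10_000):
        v, w = -1.0, 1.0                     # membrane and recovery variables
        trace = []
        for _ in range(steps):
            dv = v - v**3 / 3 - w + i_ext    # FitzHugh-Nagumo equations
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            v, w = v + dt * dv, w + dt * dw  # forward Euler step
            trace.append(v)
        return trace

    print(max(simulate()))                   # peak membrane potential
    ```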

  16. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system; plugged-in tools therefore gain automatically from transparency and reproducibility. Furthermore, when configurations match at the start of an evaluation tool run, the system suggests reusing results already produced.

  17. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or web system; plugged-in tools therefore gain automatically from transparency and reproducibility. Furthermore, when configurations match at the start of an evaluation tool run, the system suggests reusing results already produced.

  18. High Performance Radiation Transport Simulations on TITAN

    SciTech Connect

    Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P; Jarrell, Joshua J; Joubert, Wayne

    2012-01-01

    In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
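
    The KBA sweep exploits the fact that, for a given direction, a cell depends only on its upstream neighbors, so all cells on the same diagonal can be processed concurrently. The 2D sketch below illustrates that wavefront ordering with a toy recurrence; it is not Denovo's transport kernel:

    ```python
    # Wavefront (KBA-style) ordering on a 2D grid: for a sweep from the
    # lower-left, cell (i, j) depends on (i-1, j) and (i, j-1), so every
    # cell on a diagonal i + j = d is independent and could be computed
    # in parallel. Toy recurrence, not Denovo's transport operator.
    import numpy as np

    nx, ny = 6, 4
    phi = np.zeros((nx, ny))
    for d in range(nx + ny - 1):                   # sweep wavefront index
        for i in range(max(0, d - ny + 1), min(nx, d + 1)):
            j = d - i
            upstream_x = phi[i - 1, j] if i > 0 else 1.0   # inflow boundary
            upstream_y = phi[i, j - 1] if j > 0 else 1.0
            phi[i, j] = 0.5 * (upstream_x + upstream_y) * 0.9
    print(phi.round(3))
    ```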

  19. High performance hand-held gas chromatograph

    SciTech Connect

    Yu, C M; Koo, J C

    2001-01-10

    Gas chromatography is a prominent technique for separating complex gases and then analyzing the relative quantities of the separate components. This analytical technique is popular with scientists in a wide range of applications, including environmental restoration for air and water pollution, and chemical and biological analysis. Today the analytical instrumentation community is working towards moving the analysis away from the laboratory to the point of origin of the sample ("the field") to achieve real-time data collection and lower analysis costs. The Microtechnology Center of Lawrence Livermore National Laboratory has developed a hand-held, real-time detection gas chromatograph (GC) through Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this GC is approximately 8 pounds, and it measures 8 inches by 5 inches by 3 inches. It consumes approximately 12 watts of electrical power and has a response time on the order of 2 minutes. The current detector is a glow discharge detector with a sensitivity of parts per billion. The average retention time is about 30 to 45 seconds. Under optimum conditions, the calculated effective plate number is 40,000. The separation column in the portable GC is fabricated completely on silicon wafers. Silicon is a good thermal conductor and provides rapid heating and cooling of the column. The operational temperature can be as high as 350 degrees Celsius. The GC system is capable of rapid column temperature ramping and cooling operations. These are especially important for organic and biological analyses in GC applications.
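
    The quoted efficiency figures can be sanity-checked with the standard half-height plate-count relation N = 5.54 (tR / w1/2)^2. The sketch below takes a 40 s retention time from the abstract's 30-45 s range and derives the peak width that the 40,000-plate figure would imply; the width is derived for illustration, not reported in the abstract:

    ```python
    # Standard plate-count relation N = 5.54 * (tR / w_half)**2, used to
    # check what half-height peak width the quoted figures imply. The
    # 40 s retention time is from the abstract's range; the width is
    # derived here, not a reported measurement.
    def plate_number(t_r: float, w_half: float) -> float:
        return 5.54 * (t_r / w_half) ** 2

    t_r = 40.0                       # retention time [s], within 30-45 s
    n_target = 40_000                # quoted effective plate number
    w_half = t_r * (5.54 / n_target) ** 0.5
    print(f"implied half-height peak width: {w_half:.2f} s")
    print(f"check: N = {plate_number(t_r, w_half):,.0f}")
    ```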

  20. Secrets of high-performance image display

    NASA Astrophysics Data System (ADS)

    Desormeaux, David A.

    1996-04-01

    Medical imaging companies have traditionally supplied the industry with image visualization solutions based on their own custom hardware designs. Today, more and more systems are being deployed using only off-the-shelf workstations. Two major factors are driving this change. First, workstations are delivering the functionality and performance required to replace custom hardware for an ever increasing subset of visualization techniques, while continuing to come down in cost. Second, cost pressures are forcing medical imaging companies to OEM the hardware platform and focus on what they do best -- delivering solutions to health care providers. This industry shift is challenging the workstation vendors to deliver the maximum inherent performance in their computer systems to medical imaging applications without locking the application into a specific vendor's hardware. Since extracting the maximum performance from a workstation is not always intuitively obvious and often requires vendor-specific tricks, the best way to deliver performance to an application is through an application programmer's interface (API). The Hewlett-Packard Image Visualization Library (HP-IVL) is such an API. It transparently delivers the maximum possible imaging performance on Hewlett-Packard workstations, while allowing significant portability between platforms. This paper describes the performance tricks and trade-offs made in the software implementation of HP's Image Visualization Library and how the HP Image Visualization Accelerator (HP-IVX) fits into the overall architecture.