Science.gov

Sample records for high-performance microdialysis-based system

  1. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, April 18 through 21, 1994.

  2. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  3. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate, and commercialize technology for the storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal System Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed around network-connected storage devices that transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted for completion in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  4. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  5. High performance solar Stirling system

    NASA Technical Reports Server (NTRS)

    Stearns, J. W.; Haglund, R.

    1981-01-01

    A full-scale Dish-Stirling system experiment, at a power level of 25 kWe, was tested during 1981 on the Test Bed Concentrator No. 2 at the Parabolic Dish Test Site, Edwards, CA. Test components, designed and developed primarily by industrial contractors for the Department of Energy, include an advanced Stirling engine driving an induction alternator, a directly coupled solar receiver with a natural gas combustor for hybrid operation, and a breadboard control system based on a programmable controller and standard utility substation components. The experiment demonstrated the practicality of the solar Stirling application and high system performance feeding into a utility grid. This paper describes the design and its functions, and the test results obtained.

  6. High performance solar Stirling system

    NASA Astrophysics Data System (ADS)

    Stearns, J. W.; Haglund, R.

    1981-12-01

    A full-scale Dish-Stirling system experiment, at a power level of 25 kWe, was tested during 1981 on the Test Bed Concentrator No. 2 at the Parabolic Dish Test Site, Edwards, CA. Test components, designed and developed primarily by industrial contractors for the Department of Energy, include an advanced Stirling engine driving an induction alternator, a directly coupled solar receiver with a natural gas combustor for hybrid operation, and a breadboard control system based on a programmable controller and standard utility substation components. The experiment demonstrated the practicality of the solar Stirling application and high system performance feeding into a utility grid. This paper describes the design and its functions, and the test results obtained.

  7. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  8. LANL High-Performance Data System (HPDS)

    NASA Technical Reports Server (NTRS)

    Collins, M. William; Cook, Danny; Jones, Lynn; Kluegel, Lynn; Ramsey, Cheryl

    1993-01-01

    The Los Alamos High-Performance Data System (HPDS) is being developed to meet the very large data storage and data handling requirements of a high-performance computing environment. The HPDS will consist of fast, large-capacity storage devices that are directly connected to a high-speed network and managed by software distributed in workstations. The HPDS model, the HPDS implementation approach, and experiences with a prototype disk array storage system are presented.

  9. Advanced high-performance computer system architectures

    NASA Astrophysics Data System (ADS)

    Vinogradov, V. I.

    2007-02-01

    The convergence of computer systems and communication technologies is moving toward switched, high-performance modular system architectures built on high-speed switched interconnections. Multi-core processors have become a more promising route to high-performance systems, and traditional parallel-bus system architectures (VME/VXI, cPCI/PXI) are moving to new, higher-speed serial switched interconnections. Fundamentals of system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

  10. Creating a High-Performance School System.

    ERIC Educational Resources Information Center

    Thompson, Scott

    2003-01-01

    Describes several critical factors of a high-performing school system, such as a system that holds itself accountable for the success of all its schools. Provides school district examples of critical success factors in action, including districts in Colorado, Washington, Texas, California, and New Jersey. Discusses the role of strategic and authentic…

  11. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K. (Bethune-Cookman Coll.; SLAC)

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible faults or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
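The core idea above (replacing a fixed-size round-robin database with a relational table that never discards or consolidates samples) can be sketched in a few lines. This is a hypothetical illustration only: it uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and column names are assumptions, not Ganglia's actual schema.

```python
import sqlite3
import time

# Hypothetical schema: one row per (host, metric, timestamp) sample.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE metrics (
        host   TEXT NOT NULL,
        metric TEXT NOT NULL,
        ts     REAL NOT NULL,
        value  REAL NOT NULL
    )
""")

def record_sample(host, metric, value, ts=None):
    """Insert one monitoring sample; unlike an RRD, nothing is overwritten."""
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, ts if ts is not None else time.time(), value))

# Simulated load samples from two cluster nodes.
record_sample("node01", "load_one", 0.42, ts=1.0)
record_sample("node01", "load_one", 0.57, ts=2.0)
record_sample("node02", "load_one", 1.93, ts=1.0)

# Ad hoc aggregation, the kind of query an RRD's fixed consolidation makes awkward.
rows = conn.execute(
    "SELECT host, AVG(value) FROM metrics GROUP BY host ORDER BY host").fetchall()
print(rows)  # per-host averages, e.g. node01 ≈ 0.495
```

Because every raw sample is retained, integrity checks and arbitrary after-the-fact queries become simple SQL, at the cost of unbounded storage growth that an RRD avoids by design.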

  12. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  14. Management issues for high performance storage systems

    SciTech Connect

    Louis, S.; Burris, R.

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  15. Flexible high-performance IR camera systems

    NASA Astrophysics Data System (ADS)

    Hoelter, Theodore R.; Petronio, Susan M.; Carralejo, Ronald J.; Frank, Jeffery D.; Graff, John H.

    1999-07-01

    Indigo Systems Corporation has developed a family of standard readout integrated circuits (ROICs) for use in IR focal plane array (FPA) imaging systems. These standard ROICs are designed to provide a complete set of operating features for camera-level FPA control, while also providing high performance capability with any of several detector materials. By creating a uniform electrical interface for FPAs, these standard ROICs simplify the task of FPA integration with imaging electronics and physical packages. This paper begins with a brief description of the features of four Indigo standard ROICs and continues with a description of the features, design, and measured performance of indium antimonide, quantum-well IR photodetector, and indium gallium arsenide imaging systems built using the described standard ROICs.

  16. High performances imaging systems for planetary landers

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Beauvivre, S.

    2003-04-01

    Each planetary mission brings its specific needs and environmental conditions: high temperature and radiation for Mercury; shock, thermal cycles, and low-temperature operation for Mars; a long vacuum cruise phase and very low temperature for a comet nucleus. Nevertheless, all the missions share the same interests in terms of low mass, low power, and harsh environmental conditions. When a mission includes a lander, mass optimization is even more critical for the benefit of the overall science return. SPACE-X has developed high-performance imaging systems for the Rosetta Lander and the Mars Express Lander. Future imaging systems for new exploration missions have to consider the promising micro- and nano-technology developments in terms of miniaturisation, low power, wireless capabilities, etc.

  17. High performance adaptive tracking system: HPATS

    NASA Astrophysics Data System (ADS)

    Downs, James; Cannon, Randy; Segewitz, Markus; Stockum, Larry

    2005-05-01

    A high performance tracking system that adaptively adjusts the tracker algorithms and track loop parameters based on real-time scene statistics has been developed and demonstrated against realistic target scenarios. The HPATS provides the capability to acquire and track very low contrast targets to sub-pixel accuracy in the presence of background clutter and time-varying target conditions. HPATS is applicable to both fire control and terminal guidance applications that incorporate imaging sensors. An overview of the tracking system design, simulation modeling, tracker metrics tools, and field test examples of low contrast target tracking performance is presented. The HPATS technology development included a high fidelity Integrated Flight Simulation (IFS) that modeled the end-to-end performance of the missile fly-out, target acquisition, target tracking, aim-point selection, terminal guidance, and lethality.

  18. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainability features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve zero energy buildings by 2025. The objective of this project was to develop high performance, energy efficient commercial…

  19. Automated microdialysis-based system for in situ microsampling and investigation of lead bioavailability in terrestrial environments under physiologically based extraction conditions.

    PubMed

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2013-10-15

    In situ automatic microdialysis sampling under batch-flow conditions is herein proposed for the first time for expedient assessment of the kinetics of lead bioaccessibility/bioavailability in contaminated and agricultural soils, exploiting the harmonized physiologically based extraction test (UBM). Built upon a concentric microdialysis probe immersed in synthetic gut fluids, the miniaturized flow system is harnessed for continuous monitoring of lead transfer across the permselective microdialysis membrane to mimic the diffusive transport of metal species through the epithelium of the stomach and of the small intestine. In addition, the addition of the UBM gastrointestinal fluid surrogates at specified time frames is fully mechanized. Distinct microdialysis probe configurations and membrane types were investigated in detail to ensure passive sampling under steady-state dialytic conditions for lead. Using a 3-cm-long polysulfone membrane with an average molecular weight cutoff of 30 kDa in a concentric probe and a perfusate flow rate of 2.0 μL min(-1), microdialysis relative recoveries in the gastric phase were close to 100%, thereby omitting the need for probe calibration. The automatic leaching method was validated in terms of bias in the analysis of four soils with different physicochemical properties, containing a wide range of lead content (16 ± 3 to 1216 ± 42 mg kg(-1)), using mass balance assessment as a quality control tool. No significant differences between the mass balance and the total lead concentration in the suite of analyzed soils were encountered (α = 0.05). Our finding that extraction of soil-borne lead for merely one hour in the gastrointestinal phase suffices for assessment of the bioavailable fraction, as a result of the fast immobilization of lead species at near-neutral conditions, would assist in providing risk assessment data from the UBM test on short notice.
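The calibration-free claim above rests on relative recovery: the analyte concentration measured in the collected dialysate divided by the concentration in the sampled medium. A minimal sketch of that arithmetic follows; the numeric concentrations are purely illustrative, not data from the paper.

```python
def relative_recovery(c_dialysate, c_medium):
    """Microdialysis relative recovery (%): dialysate concentration
    relative to the concentration in the sampled medium."""
    return 100.0 * c_dialysate / c_medium

# Hypothetical gastric-phase lead concentrations (illustrative values only).
c_medium = 250.0     # lead in the synthetic gastric fluid, µg/L
c_dialysate = 248.0  # lead measured in the collected dialysate, µg/L

rr = relative_recovery(c_dialysate, c_medium)
print(f"relative recovery = {rr:.1f}%")  # → relative recovery = 99.2%

# When RR is close to 100%, the dialysate concentration can be read
# directly as the bioaccessible concentration; otherwise the reading
# must be scaled up by the probe's calibrated recovery.
bioaccessible = c_dialysate if rr > 95 else c_dialysate * 100.0 / rr
```

The 30 kDa polysulfone membrane and low perfusate flow rate reported in the abstract are what push recovery toward 100%, which is why the authors can skip per-probe calibration.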

  20. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three-year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort, and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and to improve processes for designing, commissioning, and operating commercial buildings, while improving the health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g., offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and…

  1. High performance VLSI telemetry data systems

    NASA Technical Reports Server (NTRS)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.

  2. A programmable MTD system with high performance

    NASA Astrophysics Data System (ADS)

    Peng, Ying-Ning; Ma, Zang-E.; Ding, Xiu-Dong; Wang, Xiu-Tan; Fu, Jeng-Yun

    A digital programmable MTD system has been developed recently. In this system, slow and fast moving targets are detected by a 64-order complex FIR filter and a 64-point FFT equivalent filter bank, respectively. A method that obtains a land-clutter CFAR threshold for every Doppler channel with very good performance is proposed. When the power spectral density of land clutter has a certain cubic shape, an average signal-to-clutter ratio improvement factor of about 48 dB can be realized in this system.

  3. High-Performance Energy Applications and Systems

    SciTech Connect

    Miller, Barton

    2014-01-01

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  4. High performance VLSI telemetry data systems

    NASA Technical Reports Server (NTRS)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA-Goddard has over the last five years developed generic ground telemetry data system elements addressing the budget-limitation-driven demand for greater modularity, flexibility, and interchangeability. These design solutions, which may be characterized as a 'functional components approach', encompass both hardware and software components; the former involve telemetry application-specific ICs for data rate requirements of up to 300 Mbps, while the latter extend to embedded local software intelligence. Attention is given to the consequences of the functional components approach for VLSI components.

  5. The high performance storage system (HPSS)

    SciTech Connect

    Kliewer, K.L.

    1995-12-31

    Ever more powerful computers and rapidly enlarging data sets require unprecedented levels of data storage and access capabilities. To help meet these requirements, the scalable, network-centered, parallel storage system HPSS was designed and is now being developed. The parallel I/O architecture, mechanisms, strategies, and capabilities are described. The current development status and the broad applicability are illustrated through a discussion of the sites at which HPSS is now being implemented, representing a spectrum of computing environments. Planned capabilities and time scales are provided. Some of the remarkable developments in storage media data density looming on the horizon are also noted.

  6. Building Synergy: The Power of High Performance Work Systems.

    ERIC Educational Resources Information Center

    Gephart, Martha A.; Van Buren, Mark E.

    1996-01-01

    Suggests that high-performance work systems create the synergy that lets companies gain and keep a competitive advantage. Identifies the components of high-performance work systems and critical action steps for implementation. Describes the results companies such as Xerox, Lever Brothers, and Corning Incorporated have achieved by using them. (JOW)

  8. Analysis of GlucoMen®Day: A Novel Microdialysis-based Continuous Glucose Monitor

    PubMed Central

    Kubiak, Thomas

    2010-01-01

    In this issue of Journal of Diabetes Science and Technology, Valgimigli and colleagues present promising data on the clinical accuracy of the new microdialysis-based continuous glucose monitoring device GlucoMen®Day. In this analysis, two issues are addressed: first, the established way of analyzing such data may obscure interindividual variability in a glucose monitoring system's accuracy; and second, to fully appreciate the future merits of the new system, data on accuracy, while a clearly necessary prerequisite, are not sufficient and need to be augmented by patient-reported outcome data, as highlighted by recent U.S. Food and Drug Administration guidelines. PMID:20920439

  9. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation and to extract parameters indicating where and how input/output limits computational performance. The following were used to gain detailed knowledge of the application problems: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  10. Toward a new metric for ranking high performance computing systems.

    SciTech Connect

    Heroux, Michael Allen; Dongarra, Jack.

    2013-06-01

    The High Performance Linpack (HPL), or Top 500, benchmark [1] is the most widely recognized and discussed metric for ranking high performance computing systems. However, HPL is increasingly unreliable as a true measure of system performance for a growing collection of important science and engineering applications. In this paper we describe a new high performance conjugate gradient (HPCG) benchmark. HPCG is composed of computations and data access patterns more commonly found in applications. Using HPCG we strive for a better correlation to real scientific application performance and expect to drive computer system design and implementation in directions that will better impact performance improvement.
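The conjugate gradient iteration at the heart of HPCG can be sketched in a few lines. This is a generic textbook CG on a small dense symmetric positive-definite system, not the actual HPCG reference code (which runs a preconditioned CG on a large sparse 27-point stencil problem); it only illustrates the memory-access-heavy kernel pattern (sparse-style matrix-vector products and dot products) the benchmark is built around.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite A (list-of-lists)."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = b[:]              # residual r = b - A x, with x starting at 0
    p = r[:]              # initial search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        # Update the search direction to stay A-conjugate to previous ones.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small SPD test system (a 1-D Laplacian-like matrix, as in stencil codes).
A = [[ 2.0, -1.0,  0.0],
     [-1.0,  2.0, -1.0],
     [ 0.0, -1.0,  2.0]]
b = [1.0, 0.0, 1.0]
x = conjugate_gradient(A, b)
print([round(v, 6) for v in x])  # → [1.0, 1.0, 1.0]
```

Unlike HPL's dense factorization, almost every operation here is bandwidth-bound (a matrix-vector product or a reduction), which is why HPCG correlates better with real application performance.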

  11. Storage Area Networks and The High Performance Storage System

    SciTech Connect

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.

  12. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  13. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  15. Class of service in the high performance storage system

    SciTech Connect

    Louis, S.; Teaff, D.

    1995-01-10

Quality of service capabilities are commonly deployed in archival mass storage systems as one or more client-specified parameters to influence physical location of data in multi-level device hierarchies for performance or cost reasons. The capabilities of new high-performance storage architectures and the needs of data-intensive applications require better quality of service models for modern storage systems. HPSS, a new distributed, high-performance, scalable, storage system, uses a Class of Service (COS) structure to influence system behavior. The authors summarize the design objectives and functionality of HPSS and describe how COS defines a set of performance, media, and residency attributes assigned to storage objects managed by HPSS servers. COS definitions are used to provide appropriate behavior and service levels as requested (or demanded) by storage system clients. They compare the HPSS COS approach with other quality of service concepts and discuss alignment possibilities.
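    The attribute-set idea behind COS can be pictured as a small record of performance, media, and residency attributes that a server matches against client requests. The sketch below is purely illustrative; the field names and selection policy are hypothetical and do not reflect the actual HPSS COS schema.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ClassOfService:
        # Hypothetical attribute names for illustration only; not the real HPSS schema.
        cos_id: int
        min_transfer_rate_mbps: float   # performance attribute
        media_type: str                 # e.g. "disk" or "tape"
        max_file_size_bytes: int
        stage_on_open: bool             # residency behavior

    def select_cos(classes, file_size, rate_needed):
        """Pick a class that satisfies the client's request, preferring
        cheaper media (tape) when more than one class qualifies."""
        candidates = [c for c in classes
                      if c.max_file_size_bytes >= file_size
                      and c.min_transfer_rate_mbps >= rate_needed]
        # Sort tape-first (assumed lower cost per byte), then by lowest rate.
        candidates.sort(key=lambda c: (c.media_type != "tape",
                                       c.min_transfer_rate_mbps))
        return candidates[0] if candidates else None
    ```

    A client request carrying only a file size and a required transfer rate can then be mapped to a concrete storage class without the client knowing anything about device hierarchies.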

  16. Materials integration issues for high performance fusion power systems.

    SciTech Connect

    Smith, D. L.

    1998-01-14

One of the primary requirements for the development of fusion as an energy source is the qualification of materials for the first wall/blanket system that will provide high performance and exhibit favorable safety and environmental features. Both the economic competitiveness and the environmental attractiveness of fusion will be strongly influenced by materials constraints. A key aspect is the development of a compatible combination of materials for the various functions of structure, tritium breeding, coolant, neutron multiplication and other special requirements for a specific system. This paper presents an overview of key materials integration issues for high performance fusion power systems. Issues such as chemical compatibility of structure and coolant, hydrogen/tritium interactions with the plasma facing/structure/breeder materials, thermomechanical constraints associated with coolant/structure, thermal-hydraulic requirements, and safety/environmental considerations are presented from a systems viewpoint. The major materials interactions for leading blanket concepts are discussed.

  17. Los Alamos National Laboratory's high-performance data system

    SciTech Connect

    Mercier, C.; Chorn, G.; Christman, R.; Collins, B.

    1991-01-01

    Los Alamos National Laboratory is designing a High-Performance Data System (HPDS) that will provide storage for supercomputers requiring large files and fast transfer speeds. The HPDS will meet the performance requirements by managing data transfers from high-speed storage systems connected directly to a high-speed network. File and storage management software will be distributed in workstations. Network protocols will ensure reliable, wide-area network data delivery to support long-distance distributed processing. 3 refs., 2 figs.

  18. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage systems by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  19. High performance storage system at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.

    1996-04-01

Scientific computing centers are acquiring large, distributed memory machines. With memory systems of 0.25 to 2.5 terabytes, these machines will deliver 1-10 teraflop computing capabilities. The need to move 10s or 100s of gigabytes, and the need to provide petabyte storage systems, are issues that must be addressed before the year 2000. Work currently underway at Sandia addresses these issues. The High Performance Storage System (HPSS) is in limited production and the mass storage environment to support Sandia's teraflop computer system is being constructed. 26 refs., 5 figs.

  20. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PC) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PC's are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  1. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems of very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  2. Alternative High Performance Polymers for Ablative Thermal Protection Systems

    NASA Technical Reports Server (NTRS)

    Boghozian, Tane; Stackpoole, Mairead; Gonzales, Greg

    2015-01-01

    Ablative thermal protection systems are commonly used as protection from the intense heat during re-entry of a space vehicle and have been used successfully on many missions including Stardust and Mars Science Laboratory both of which used PICA - a phenolic based ablator. Historically, phenolic resin has served as the ablative polymer for many TPS systems. However, it has limitations in both processing and properties such as char yield, glass transition temperature and char stability. Therefore alternative high performance polymers are being considered including cyanate ester resin, polyimide, and polybenzoxazine. Thermal and mechanical properties of these resin systems were characterized and compared with phenolic resin.

  4. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs ''lean'' compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC ''middleware'' software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  5. Probabilistic performance-based design for high performance control systems

    NASA Astrophysics Data System (ADS)

    Micheli, Laura; Cao, Liang; Gong, Yongqiang; Cancelli, Alessandro; Laflamme, Simon; Alipour, Alice

    2017-04-01

High performance control systems (HPCS) are advanced damping systems capable of high damping performance over a wide frequency bandwidth, ideal for mitigation of multi-hazards. They include active, semi-active, and hybrid damping systems. However, HPCS are more expensive than typical passive mitigation systems, rely on power and hardware (e.g., sensors, actuators) to operate, and require maintenance. In this paper, a life cycle cost analysis (LCA) approach is proposed to estimate the economic benefit of these systems over the entire life of the structure. The novelty resides in integrating life cycle cost analysis into performance-based design (PBD) tailored to multi-level wind hazards. This yields a probabilistic performance-based design approach for HPCS. Numerical simulations are conducted on a building located in Boston, MA. LCAs are conducted for passive control systems and HPCS, and the concept of controller robustness is demonstrated. Results highlight the promise of the proposed performance-based design procedure.
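    The core of a life-cycle cost comparison like the one described above is the sum of the initial cost and the discounted expected annual losses over the structure's life, with each hazard level contributing its exceedance probability times its repair cost. A minimal sketch of that computation follows; the function name and all numeric inputs are hypothetical, not values from the paper.

    ```python
    def expected_life_cycle_cost(initial_cost, annual_maintenance,
                                 hazard_levels, lifetime_years, discount_rate):
        """Hypothetical life-cycle cost sketch: initial cost plus discounted
        expected annual losses and maintenance over the structure's life.

        hazard_levels: list of (annual_exceedance_prob, repair_cost) pairs,
        one per hazard level (e.g., multiple wind intensities).
        """
        # Expected annual loss: sum of probability-weighted repair costs.
        expected_annual_loss = sum(p * c for p, c in hazard_levels)
        total = initial_cost
        for year in range(1, lifetime_years + 1):
            # Discount each year's expected loss and maintenance to present value.
            total += ((expected_annual_loss + annual_maintenance)
                      / (1.0 + discount_rate) ** year)
        return total
    ```

    Evaluating this for a passive system (low initial cost, high expected losses) and an HPCS (higher initial and maintenance cost, lower expected losses) gives the kind of comparison the LCA in the paper formalizes.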

  6. Architectures for a high performance distributed operating system

    NASA Astrophysics Data System (ADS)

    Schroder, Kenneth J.; Schantz, Richard E.; Vinter, Stephen T.

    1991-03-01

This document focuses on how object-oriented distributed system architectures adapt to incorporate the benefits of high-performance communication technology. The introduction of fiber optic digital technology over the next decade will lead to a dramatic increase in communication bandwidth. Other technologies will lead to comparable improvements in processor, memory, and device speeds. These advances will result in changes to communication protocols, networks, and operating systems, and growing distribution of functions across multiple computers both locally and globally. These major underlying changes will have to be reflected in the design of new distributed systems. The growth trends of these related technologies are surveyed, followed by a brief characterization of the applications that will benefit from these advances. The key goals of software development in high performance environments are to enable new classes of applications by exploiting performance and to develop better application structures to encourage scalability. This report discusses such goals of distributed systems and the impact of higher performance technology on each. Detailed architectural issues are covered with specific recommendations for further investigation. Suggestions for improvements to the Cronus distributed operating system to adapt it to higher performance environments are given.

  7. The architecture of the High Performance Storage System (HPSS)

    SciTech Connect

    Teaff, D.; Coyne, B.; Watson, D.

    1995-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage systems by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  8. High performance image processing and laser beam recording system

    NASA Astrophysics Data System (ADS)

    Fanelli, A. R.

    1981-06-01

A high-performance image processing system which includes a laser image recorder has been developed to cover a full range of digital image processing techniques and capabilities. The Digital Interactive Image Processing System (DIIPS) consists of an HP3000 Series II computer together with a high-speed array processor, a high-speed tape drive, a series display system, a stereo optics viewing position, a printer/plotter and a CPU link, providing the capability for mensuration and exploitation of digital imagery with both mono and stereo digital images. Software employed includes the Hewlett-Packard standard system software composed of operating system, utilities, compilers and standard function library packages, the standard IDIMS software, and specially developed software relating to photographic and stereo mensuration. The Ultra High Resolution Image Recorder is a modification of a standard laser beam recorder with a capability of recording in excess of 18 K pixels per image line.

  9. CVC silicon carbide high-performance optical systems

    NASA Astrophysics Data System (ADS)

    Fischer, William F., III; Foss, Colby A., Jr.

    2004-10-01

The demand for high performance lightweight mirrors has never been greater. The coming years will require lighter and higher performance mirrors, and in greater numbers than are currently available. Applications include both ground and space based telescopes, surveillance, navigation, guidance, and tracking and control systems. For instance, the total requirement for US government sponsored systems alone is projected to be greater than 200 m2/year [1]. Given that the total current global production capacity is on the order of 50 m2/year [1], the need and opportunity to rapidly produce high quality optics is readily apparent. Key areas of concern for all these programs are not only the mission critical optical performance metrics, but also the ability to meet the timeline for deployment. As such, any potential reduction in the long lead times for manufactured optical systems and components is critical. The associated improvements with such advancements would lead to reductions in schedule and acquisition cost, as well as increased performance. Trex's patented CVC SiC process is capable of rapidly producing high performance SiC optics for any optical system. This paper will summarize the CVC SiC production process and the current optical performance levels, as well as future areas of work.

  10. Ultra High Performance, Highly Reliable, Numeric Intensive Processors and Systems

    DTIC Science & Technology

    1989-10-01

…to design high-performance DSP/IP systems using either off-the-shelf components or application-specific integrated circuitry (ASIC). …are the chirp-z transform (CZT) [13] and (Rader's) Prime Factor Transform (PFT) [11]. The RNS/CZT is being studied by a group at MITRE [14]… The PFT RNS/CRNS/QRNS implementation has dynamic range requirements on the order of NQ² (vs NQ⁴ for the CZT and much higher for the FFT).

  11. High performance distributed feedback fiber laser sensor array system

    NASA Astrophysics Data System (ADS)

    He, Jun; Li, Fang; Xu, Tuanwei; Wang, Yan; Liu, Yuliang

    2009-11-01

Distributed feedback (DFB) fiber lasers have unique properties useful for sensing applications. This paper presents a high performance DFB fiber laser sensor array system. Four key techniques have been adopted to set up the system: DFB fiber laser design and fabrication, interferometric wavelength shift demodulation, the digital phase generated carrier (PGC) technique, and dense wavelength division multiplexing (DWDM). Experimental results confirm that a high dynamic strain resolution of 305 fɛ/√Hz (@ 1 kHz) has been achieved by the proposed sensor array system, and multiplexing of an eight-channel DFB fiber laser sensor array has been demonstrated. The proposed DFB fiber laser sensor array system is suitable for ultra-weak signal detection and has potential applications in petroleum seismic exploration, earthquake prediction, and security.

  12. High performance/low cost accelerator control system

    NASA Astrophysics Data System (ADS)

    Magyary, S.; Glatz, J.; Lancaster, H.; Selph, F.; Fahmie, M.; Ritchie, A.; Timossi, C.; Hinkson, C.; Benjegerdes, R.

    1980-10-01

    Implementation of a high performance computer control system tailored to the requirements of the Super HILAC accelerator is described. This system uses a distributed structure with fiber optic data links; multiple CPUs operate in parallel at each node. A large number of the latest 16 bit microcomputer boards are used to get a significant processor bandwidth. Dynamically assigned and labeled knobs together with touch screens allow a flexible and efficient operator interface. An X-Y vector graphics system allows display and labeling of real time signals as well as general plotting functions. Both the accelerator parameters and the graphics system can be driven from BASIC interactive programs in addition to the precanned user routines.

  13. Software Systems for High-performance Quantum Computing

    SciTech Connect

    Humble, Travis S; Britt, Keith A

    2016-01-01

Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  14. High performance electrospinning system for fabricating highly uniform polymer nanofibers

    NASA Astrophysics Data System (ADS)

Munir, Muhammad Miftahul; Iskandar, Ferry; Khairurrijal; Okuyama, Kikuo

    2009-02-01

A high performance electrospinning system has been successfully developed for production of highly uniform polymer nanofibers. The electrospinning system employed a proportional-integral-derivative control action to maintain a constant current during the production of polyvinyl acetate (PVAc) nanofibers from a precursor solution prepared by dissolution of the PVAc powder in dimethyl formamide, so that high uniformity of the nanofibers was achieved. It was found that the cone jet length observed at the end of the needle during the injection of the precursor solution and the average diameter of the nanofibers decreased with decreasing Q/I, where Q is the flow rate of the precursor solution of the nanofibers and I is the current flowing through the electrospinning system. A power law obtained from the relation between the average diameter and Q/I is in accordance with the theoretical model.
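    A power-law relation like the diameter-vs-Q/I dependence above is typically extracted by linear regression in log-log space: if d = k·(Q/I)^a, then log d = log k + a·log(Q/I). A minimal sketch of that fit follows; the sample data in the usage below are hypothetical, not measurements from the paper.

    ```python
    import math

    def fit_power_law(x, y):
        """Least-squares fit of y = k * x**a via log-log linear regression.
        Returns (k, a). Requires all x and y values to be positive."""
        lx = [math.log(v) for v in x]
        ly = [math.log(v) for v in y]
        n = len(x)
        mx = sum(lx) / n
        my = sum(ly) / n
        # Slope of the log-log regression line is the power-law exponent a.
        a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
             / sum((u - mx) ** 2 for u in lx))
        # Intercept gives log k.
        k = math.exp(my - a * mx)
        return k, a
    ```

    Applied to measured (Q/I, diameter) pairs, the returned exponent can then be compared against the theoretical model the abstract refers to.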

  15. High performance/low cost accelerator control system

    SciTech Connect

    Magyary, S.; Glatz, J.; Lancaster, H.; Selph, F.; Fahmie, M.; Ritchie, A.; Timossi, C.; Hinkson, C.; Benjegerdes, R.

    1980-10-01

    Implementation of a high performance computer control system tailored to the requirements of the SuperHILAC accelerator is described. This system uses a distributed (star-type) structure with fiber optic data links; multiple CPU's operate in parallel at each node. A large number (20) of the latest 16-bit microcomputer boards are used to get a significant processor bandwidth (exceeding that of many mini-computers) at a reasonable price. Because of the large CPU bandwidth, software costs and complexity are significantly reduced and programming can be less real-time critical. In addition all programming can be in a high level language. Dynamically assigned and labeled knobs together with touch-screens allow a flexible operator interface. An X-Y vector graphics system allows display and labeling of real-time signals as well as general plotting functions. Both the accelerator parameters and the graphics system can be driven from BASIC interactive programs in addition to the pre-canned user routines. This allows new applications to be developed quickly and efficiently by physicists, operators, etc. The system, by its very nature and design, is easily upgraded (via next generation of boards) and repaired (by swapping of boards) without a large hardware support group. This control system is now being tested on an existing beamline and is performing well. The techniques used in this system can be readily applied to industrial control systems.

  16. Development of a High Performance Acousto-Ultrasonic Scan System

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2003-03-01

Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  17. Development of a High Performance Acousto-Ultrasonic Scan System

    NASA Astrophysics Data System (ADS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-10-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  18. Development of a High Performance Acousto-ultrasonic Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  19. Sustaining high performance: dynamic balancing in an otherwise unbalanced system.

    PubMed

    Wolf, Jason A

    2011-01-01

As Ovid said, "There is nothing in the whole world which is permanent." It is this very premise that frames the discoveries in this chapter and the compelling paradox it has raised. What began as a question of how performance is sustained unveiled a collection of core organizational paradoxes. The findings ultimately suggest that sustained high performance is not a permanent state an organization achieves, but rather that sustainability occurs through perpetual movement and dynamic balance. The idea of sustainability as movement is predicated on the ability of organizational members to move beyond the experience of paradox as an impediment to progress. Through holding three critical "movements"--agile/consistency, collective/individualism, and informative/inquiry--not as paradoxical, but as active polarities, the organizations in the study were able to transcend paradox and take active steps toward continuous achievement in outperforming their peers. The study, focused on a collection of hospitals across the United States, reveals powerful stories of care and service, of the profound grace of human capacity, and of clear actions taken to create significant results. All of this was achieved in an environment of great volatility, in essence an unbalanced system. It was the discovery of movement, and ultimately of dynamic balancing, that allowed the organizations in this study to move beyond stasis to the continuous "state" of sustaining high performance.

  20. Coal-fired high performance power generating system. Final report

    SciTech Connect

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design, and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV) compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative type gas turbine, efficiencies of over 49% could be realized in advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.

  1. A high-performance digital system for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Cui, Ziqiang; Wang, Huaxiang; Chen, Zengqiang; Xu, Yanbin; Yang, Wuqiang

    2011-05-01

This paper describes a recently developed digital-based data acquisition system for electrical capacitance tomography (ECT). The system consists of high-capacity field-programmable gate arrays (FPGA) and fast data conversion circuits together with a specific signal processing method. In this system, digital phase-sensitive demodulation is implemented. A specific data acquisition scheme is employed to deal with residual charges in each measurement, resulting in a high signal-to-noise ratio (SNR) at high excitation frequency. A high-speed USB interface is employed between the FPGA and a host PC. Software in Visual C++ has been developed to accomplish operational functions. Various tests were performed to evaluate the system, e.g. frame rate, SNR, noise level, linearity, and static and dynamic imaging. The SNR is 60.3 dB at 1542 frames/s for a 12-electrode sensor. The mean absolute error between the measured capacitance and the linear fit value is 1.6 fF. The standard deviation of the measurements is on the order of 0.1 fF. The dynamic imaging test demonstrates the advantages of the high temporal resolution of the system. The experimental results indicate that digital signal processing devices can be used to construct a high-performance ECT system.
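    Digital phase-sensitive demodulation of the kind mentioned above multiplies the sampled sensing signal by in-phase and quadrature references at the excitation frequency and averages the products, recovering amplitude and phase while rejecting off-frequency noise. The sketch below illustrates the principle only; it is not the authors' FPGA implementation, and the frequencies in the usage are hypothetical.

    ```python
    import math

    def phase_sensitive_demodulate(samples, f_ref, f_s):
        """Recover amplitude and phase of a sinusoid at f_ref from sampled
        data by multiplying with quadrature references and averaging.
        Assumes an integer number of reference periods in the record."""
        n = len(samples)
        i_acc = q_acc = 0.0
        for k, v in enumerate(samples):
            phi = 2.0 * math.pi * f_ref * k / f_s
            i_acc += v * math.cos(phi)   # in-phase product
            q_acc += v * math.sin(phi)   # quadrature product
        i_avg = 2.0 * i_acc / n
        q_avg = 2.0 * q_acc / n
        amplitude = math.hypot(i_avg, q_avg)
        phase = math.atan2(q_avg, i_avg)
        return amplitude, phase
    ```

    In an ECT front end, the recovered amplitude is proportional to the inter-electrode capacitance being measured, which is why the technique supports the fF-level resolutions reported above.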

  2. High performance cluster system design for remote sensing data processing

    NASA Astrophysics Data System (ADS)

    Shi, Yuanli; Shen, Wenming; Xiong, Wencheng; Fu, Zhuo; Xiao, Rulin

    2012-10-01

    During recent years, cluster systems have played an increasingly important role in high-performance computing architecture design; they are cost-effective and efficient parallel computing systems able to satisfy specific computational requirements in the earth and space sciences communities. This paper presents a powerful cluster system built by the Satellite Environment Center, Ministry of Environmental Protection of China, that is designed to process massive remote sensing data from the HJ-1 satellites automatically every day. The architecture of this cluster system, including the hardware device layer, network layer, OS/FS layer, middleware layer, and application layer, is given. To verify the performance of the cluster system, image registration was chosen as an experiment on one scene from the HJ-1 CCD sensor. The registration experiments show that it is an effective system for improving the efficiency of data processing, and that it could provide a rapid response in applications that demand one, such as wildland fire monitoring and tracking, oil spill monitoring, military target detection, etc. Further work will focus on the comprehensive parallel design and implementation of remote sensing data processing.
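
The scene-parallel processing such a cluster performs can be sketched with a worker pool that fans independent scenes out across workers. The scene IDs and the registration stub below are hypothetical placeholders, not the center's actual pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def register_scene(scene_id):
    """Placeholder for registering one scene against a reference image;
    the real step would run the actual image-registration algorithm."""
    return scene_id, f"registered-{scene_id}"

# Hypothetical scene identifiers for one day's acquisitions
scene_ids = [f"HJ1-CCD-{i:03d}" for i in range(8)]

# Each scene is independent, so the pool processes them concurrently
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(register_scene, scene_ids))
```

On a real cluster the same fan-out would be done across nodes by the middleware layer rather than by threads in one process.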

  3. Integrated microfluidic systems for high-performance genetic analysis.

    PubMed

    Liu, Peng; Mathies, Richard A

    2009-10-01

    Driven by the ambitious goals of genome-related research, fully integrated microfluidic systems have developed rapidly to advance biomolecular and, in particular, genetic analysis. To produce a microsystem with high performance, several key elements must be strategically chosen, including device materials, temperature control, microfluidic control, and sample/product transport integration. We review several significant examples of microfluidic integration in DNA sequencing, gene expression analysis, pathogen detection, and forensic short tandem repeat typing. The advantages of high speed, increased sensitivity, and enhanced reliability enable these integrated microsystems to address bioanalytical challenges such as single-copy DNA sequencing, single-cell gene expression analysis, pathogen detection, and forensic identification of humans in formats that enable large-scale and point-of-analysis applications.

  4. Three-Dimensional Electrodes for High-Performance Bioelectrochemical Systems

    PubMed Central

    Yu, Yang-Yang; Zhai, Dan-Dan; Si, Rong-Wei; Sun, Jian-Zhong; Liu, Xiang; Yong, Yang-Chun

    2017-01-01

    Bioelectrochemical systems (BES) are groups of bioelectrochemical technologies and platforms that could facilitate versatile environmental and biological applications. The performance of BES is mainly determined by the key process of electron transfer at the bacteria-electrode interface, which is known as extracellular electron transfer (EET). Thus, developing novel electrodes that encourage bacterial attachment and enhance EET efficiency is of great significance. Recently, three-dimensional (3D) electrodes, which provide a large specific surface area for bacterial attachment and macroporous structures for substrate diffusion, have emerged as promising electrodes for high-performance BES. Herein, a comprehensive review of the versatile methodologies developed for 3D electrode fabrication is presented. This review article is organized around the categorization of 3D electrode fabrication strategies and a comparison of BES performance. In particular, the advantages and shortcomings of these 3D electrodes are presented and their future development is discussed. PMID:28054970

  5. High-performance work systems and occupational safety.

    PubMed

    Zacharatos, Anthea; Barling, Julian; Iverson, Roderick D

    2005-01-01

    Two studies were conducted investigating the relationship between high-performance work systems (HPWS) and occupational safety. In Study 1, data were obtained from company human resource and safety directors across 138 organizations. LISREL VIII results showed that an HPWS was positively related to occupational safety at the organizational level. Study 2 used data from 189 front-line employees in 2 organizations. Trust in management and perceived safety climate were found to mediate the relationship between an HPWS and safety performance measured in terms of personal-safety orientation (i.e., safety knowledge, safety motivation, safety compliance, and safety initiative) and safety incidents (i.e., injuries requiring first aid and near misses). These 2 studies provide confirmation of the important role organizational factors play in ensuring worker safety.

  6. Systems design of high performance stainless steels II. Prototype characterization

    NASA Astrophysics Data System (ADS)

    Campbell, C. E.; Olson, G. B.

    2000-10-01

    Within the framework of a systems approach, the design of a high performance stainless steel integrated processing/structure/property/performance relations with mechanistic computational models. Using multicomponent thermodynamic and diffusion software platforms, the models were integrated to design a carburizable, secondary-hardening, martensitic stainless steel for advanced gear and bearing applications. Prototype evaluation confirmed the predicted martensitic transformation temperature and the desired carburizing and tempering responses, achieving a case hardness of Rc 64 in the secondary-hardened condition without case primary carbides. Comparison with a commercial carburizing stainless steel demonstrated the advantage of avoiding primary carbides to resist quench cracking associated with a martensitic start temperature gradient reversal. Based on anodic polarization measurements and salt-spray testing, the prototype composition exhibited superior corrosion resistance in comparison to the 440C stainless bearing steel, which has a significantly higher alloy Cr concentration.

  7. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative and high-performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics, and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, meaning that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom-designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post-processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed in a 2D or 3D space, on black-and-white or grayscale images, depending on the application, exponentially increasing the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithms, along with performance results on a latest-generation Xilinx Kintex UltraScale FPGA device.
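
The any-match semantics of an associative memory, which compares the input against every stored pattern simultaneously in hardware, can be mimicked in software with a set lookup over sliding windows. The bit patterns and window width below are invented for illustration and are not the trained patterns of the actual chip:

```python
# Hypothetical "trained" contour fragments stored in the associative memory
reference_patterns = {
    (0, 1, 1, 0),
    (1, 0, 0, 1),
    (1, 1, 1, 1),
}

def match_windows(row, width=4):
    """Return start indices where a sliding window over the row
    hits any stored pattern (software analogue of the AM any-match)."""
    hits = []
    for i in range(len(row) - width + 1):
        if tuple(row[i:i + width]) in reference_patterns:
            hits.append(i)
    return hits

hits = match_windows([0, 0, 1, 1, 0, 0, 1, 1, 0])
```

The hardware version answers all stored-pattern comparisons in one clock; the set lookup here gives the same result sequentially.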

  8. Engineering Development of Coal-Fired High Performance Power Systems

    SciTech Connect

    2000-12-31

    This report presents work carried out under contract DE-AC22-95PC95144, ''Engineering Development of Coal-Fired High Performance Systems Phase II and III.'' The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase I, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase I also included preliminary R&D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. Phase II had as its initial objective the development of a complete design base for the construction and operation of a HIPPS prototype plant to be constructed in Phase III. As part of a descoping initiative, the Phase III program has been eliminated and work related to the commercial plant design has ended. The rescoped program retained a program of engineering research and development focusing on high temperature heat exchangers, e.g. HITAF development (Task 2); a rescoped Task 6 that is pertinent to Vision 21 objectives and focuses on advanced cycle analysis and optimization, integration of gas turbines into complex cycles, and repowering designs; and preparation of the Phase II Technical Report (Task 8). This rescoped program deleted all subsystem testing (Tasks 3, 4, and 5) and the development of a site-specific engineering design and test plan for the HIPPS prototype plant (Task 7). Work reported herein is from Task 2.2, HITAF Air Heaters

  9. Study of High-Performance Satellite Bus System

    NASA Astrophysics Data System (ADS)

    Shirai, Tatsuya; Noda, Atsushi; Tsuiki, Atsuo

    2002-01-01

    For Low Earth Orbit (LEO) satellites such as earth observation satellites, a lightweight, high performance bus system will make a great contribution to mission component development. Also, the rising ratio of payload to total mass will reduce the launch cost. The Office of Research and Development in the National Space Development Agency of Japan (NASDA) is studying such a sophisticated satellite bus system. The system is expected to consist of the following advanced components and subsystems, which have been developed in parallel from the element level by the Office. (a) Attitude control system (ACS): This subsystem will provide the function to very accurately determine and control the satellite attitude with a next-generation star tracker, a GPS receiver, and the onboard software to achieve this function. (b) Electric power system (EPS): This subsystem will become much lighter and more powerful by utilizing more efficient solar battery cells, power MOSFETs, and DC/DC converters. In addition, to store and supply power, the Office will also study a lithium battery for space that is light and small enough to contribute to reducing the size and weight of the EPS. (c) Onboard computing system (OCS): This computing system will provide high-speed processing. The MPU (Multi Processing Unit) cell in the OCS is capable of executing approximately 200 MIPS (Mega Instructions Per Second). The OCS will play an important role not only in supporting the ACS but also in handling image processing data. (d) Thermal control system (TCS): As a thermal control system, a mission-friendly system is under study. A small hybrid fluid thermal control system that the Office is studying, combining a mechanical pump loop and a capillary pump loop, will be robust to changes of thermal loads and facilitate temperature control. (e) Communications system (CS): In order to transmit high-rate data, the Office is studying an optical link system

  10. Using distributed OLTP technology in a high performance storage system

    SciTech Connect

    Tyler, T.W.; Fisher, D.S.

    1995-03-01

    The design of scalable mass storage systems requires various system components to be distributed across multiple processors. Most of these processes maintain persistent database-type information (i.e., metadata) on the resources they are responsible for managing (e.g., bitfiles, bitfile segments, physical volumes, virtual volumes, cartridges, etc.). These processes all participate in fulfilling end-user requests and updating metadata information. A number of challenges arise when distributed processes attempt to maintain separate metadata resources with production-level integrity and consistency. For example, when requests fail, metadata changes made by the various processes must be aborted or rolled back. When requests are successful, all metadata changes must be committed together. If all metadata changes cannot be committed together for some reason, then all metadata changes must be rolled back to the previous consistent state. Lack of metadata consistency jeopardizes storage system integrity. Distributed on-line transaction processing (OLTP) technology can be applied to distributed mass storage systems as the mechanism for managing the consistency of distributed metadata. OLTP concepts are familiar to many industries, such as banking and financial services, but are less well known and understood in scientific and technical computing. As mass storage systems and other products are designed using distributed processing and data-management strategies for performance, scalability, and/or availability reasons, distributed OLTP technology can be applied to solve the inherent challenges raised by such environments. This paper discusses the benefits of using distributed transaction processing products. Design and implementation experiences using the Encina OLTP product from Transarc in the High Performance Storage System are presented in more detail as a case study for how this technology can be applied to mass storage systems designed for distributed environments.
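
Distributed OLTP products coordinate such all-or-nothing metadata updates with protocols like two-phase commit: every participant first votes on whether it can apply its change, and the change commits only if all vote yes. The toy sketch below illustrates the protocol shape only; it is not Encina's API, and the class and method names are invented:

```python
class Participant:
    """Toy metadata server that can tentatively prepare, then commit or abort."""
    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "idle"

    def prepare(self):
        # Phase 1: durably record the tentative change and vote
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Phase 1: collect votes. Phase 2: commit only if all voted yes,
    otherwise roll every participant back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()
    return False

servers = [Participant("bitfile"), Participant("volume"), Participant("cartridge")]
ok = two_phase_commit(servers)
```

A real OLTP monitor adds durable logs and failure recovery around both phases, which is exactly the machinery the paper argues is worth buying rather than building.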

  11. Development of a High Performance Storage System (HPSS)

    SciTech Connect

    Kliewer, K.L.

    1996-12-27

    The overall objective of the project was the development of a parallel high performance storage software package capable of data transfer rates above 1 gigabyte/sec with files of essentially unlimited size. This necessitated modules for uniquely identifying files to be stored, for establishing the appropriate locale for the file in the storage hardware, for moving the file in parallel to the selected node, and for making possible ready access to the file when desired. And all of this must be done with absolute accuracy and reliability while ensuring security at the requisite level. Responsibility for the various modules was distributed across the participating laboratories. The central LMER responsibility was the Storage System Management (SSM) package, the software package that controls all storage and access activities and provides readily understandable and complete information concerning system status to an operator. This information includes storage and access activity in progress; the location, size, and character of all files; and warning and error messages, among others. As such, SSM must be tightly coordinated with all of the HPSS modules and components and must represent, in effect, a synthesis of all. The result of this very extensive LMER effort was an SSM system that required approximately 83,000 physical lines of computer code.

  12. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM in 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  13. Sensor fusion methods for high performance active vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Collette, C.; Matichard, F.

    2015-04-01

    Sensor noise often limits the performance of active vibration isolation systems. Inertial sensors used in such systems can be selected from a wide variety of instrument noise and size characteristics. However, the most sensitive instruments are often the biggest and heaviest. Consequently, high-performance active isolators sometimes embed many tens of kilograms of instrumentation. The weight and size of instrumentation can add unwanted constraints on the design: they tend to lower the structure's natural frequencies and reduce the collocation between sensors and actuators. Both effects tend to reduce feedback control performance and stability. This paper discusses sensor fusion techniques that can be used to increase the control bandwidth (and/or the stability). For this, the low-noise inertial instrument signal dominates the fusion at low frequency to provide vibration isolation. Other types of sensors (relative motion, smaller but noisier inertial, or force sensors) are used at higher frequencies to increase stability. Several sensor fusion configurations are studied. The paper shows the improvement that can be expected for several case studies, including rigid equipment, flexible equipment, and flexible equipment mounted on a flexible support structure.
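
The basic frequency split the paper describes, with one sensor dominating at low frequency and another at high frequency, can be sketched with a first-order complementary filter. This is a generic illustration rather than any configuration from the paper; the signals and the filter constant are invented:

```python
import numpy as np

def complementary_fuse(inertial, other, alpha=0.98):
    """Fuse two position estimates: low-pass the quiet inertial signal and
    high-pass the second sensor, so each dominates in its best band."""
    fused = np.empty(len(inertial))
    fused[0] = inertial[0]
    for k in range(1, len(fused)):
        # high-pass path: only the increments of the second sensor pass;
        # low-pass path: a slow leak toward the inertial reading
        fused[k] = alpha * (fused[k - 1] + other[k] - other[k - 1]) \
                   + (1 - alpha) * inertial[k]
    return fused

# The second sensor carries a constant bias; the fusion rejects it because
# only its increments (high-frequency content) are used.
inertial = np.ones(500)        # quiet low-frequency reference
other = np.ones(500) + 5.0     # same motion, large DC offset
fused = complementary_fuse(inertial, other)
```

Because the two paths are complementary (their transfer functions sum to one), the fused signal follows the true motion in both bands while the bias of the noisier sensor is filtered out.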

  14. High-performance conjugate-gradient benchmark: A new metric for ranking high-performance computing systems

    DOE PAGES

    Dongarra, Jack; Heroux, Michael A.; Luszczek, Piotr

    2015-08-17

    Here, we describe a new high-performance conjugate-gradient (HPCG) benchmark. HPCG is composed of computations and data-access patterns commonly found in scientific applications. HPCG strives for a better correlation to existing codes from the computational science domain and to be representative of their performance. Furthermore, HPCG is meant to help drive the computer system design and implementation in directions that will better impact future performance improvement.
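
The kernel at the heart of the benchmark is the conjugate-gradient iteration for symmetric positive-definite systems. The sketch below is a textbook unpreconditioned CG on a tiny dense matrix, purely to show the recurrence; HPCG itself runs a preconditioned variant on a large structured sparse problem:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A via textbook CG."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # conjugate direction update
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The sparse matrix-vector product and the two dot products per iteration are exactly the memory-bandwidth-bound operations that make HPCG a different stressor than dense LINPACK.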

  15. Coal-fired high performance power generating system

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates < 25% NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept, it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  16. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH PERFORMANCE POWER SYSTEMS

    SciTech Connect

    1998-10-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, the University of Tennessee Space Institute, and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent-size PC plant. The concept uses a pyrolyzation process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately, and after each experimental program has been completed, a larger-scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor and, as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. Preliminary process design was started with respect to the integrated test program at the PSDF. All of the construction tasks at Foster Wheeler's Combustion and Environmental Test

  17. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing, with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1 Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10 Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1 Hz deterministic simulation results with 10 Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year.
Large-scale SCEC/CME high performance codes

  18. Systems design of high-performance stainless steels

    NASA Astrophysics Data System (ADS)

    Campbell, Carelyn Elizabeth

    A systems approach has been applied to the design of high performance stainless steels. Quantitative property objectives were addressed by integrating processing/structure/property relations with mechanistic models. Martensitic transformation behavior was described using the Olson-Cohen model for heterogeneous nucleation and the Ghosh-Olson solid-solution strengthening model for interfacial mobility, and incorporating an improved description of Fe-Co-Cr thermodynamic interaction. Coherent M2C precipitation in a BCC matrix was described, taking into account initial paraequilibrium with cementite. Using available SANS data, a composition-dependent strain energy was calibrated and a composition-independent interfacial energy was evaluated to predict the critical particle size versus the fraction of the reaction completed as input to strengthening theory. Multicomponent Pourbaix diagrams provided an effective tool for evaluating oxide stability; constrained equilibrium calculations correlated oxide stability to Cr enrichment in the oxide film to allow more efficient use of alloy Cr content. Multicomponent solidification simulations provided composition constraints to improve castability. Using the Thermo-Calc and DICTRA software packages, the models were integrated to design a carburizing, secondary-hardening martensitic stainless steel. Initial characterization of the prototype showed good agreement with the design models and achievement of the desired property objectives. Prototype evaluation confirmed the predicted martensitic transformation temperature and the desired carburizing response, achieving a case hardness of Rc 64 in the secondary-hardened condition without case primary carbides. Decarburization experiments suggest that the design core toughness objective (KIC = 65 MPa√m) can be achieved by reducing the core carbon level to 0.05 weight percent.
To achieve the core toughness objective at high core strength levels requires further analysis of an

  19. High-Performance Acousto-Ultrasonic Scan System Being Developed

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Martin, Richard E.; Cosgriff, Laura M.; Gyekenyesi, Andrew L.; Kautz, Harold E.

    2003-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition and distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods, such as ultrasonic c-scan, x-ray radiography, and thermographic inspection, which tend to be used primarily for discrete flaw detection. Throughout its history, AU has been used to inspect polymer matrix composites, metal matrix composites, ceramic matrix composites, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. This year, essential AU technology was reviewed. In addition, the basic hardware and software configuration for the scanner was developed, and preliminary results with the system were described. Mechanical and environmental loads applied to composite materials can cause distributed damage (as well as discrete defects) that plays a significant role in the degradation of physical properties. Such damage includes fiber/matrix debonding (interface failure), matrix microcracking, and fiber fracture and buckling. Investigations at the NASA Glenn Research Center have shown that traditional NDE scan inspection methods such as ultrasonic c-scan, x-ray imaging, and thermographic imaging tend to be more suited to discrete defect detection than to the characterization of accumulated distributed microdamage in composites. Since AU is focused on assessing the distributed microdamage state of the material in between the sending and receiving transducers, it has proven to be quite suitable for assessing the relative composite material state. One major success story at Glenn with AU measurements has been the correlation between the ultrasonic decay rate obtained during AU

  1. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  3. High-Performance Scanning Acousto-Ultrasonic System

    NASA Technical Reports Server (NTRS)

    Roth, Don; Martin, Richard; Kautz, Harold; Cosgriff, Laura; Gyekenyesi, Andrew

    2006-01-01

    A high-performance scanning acousto-ultrasonic system, now undergoing development, is designed to afford enhanced capabilities for imaging microstructural features, including flaws, inside plate specimens of materials. The system is expected to be especially helpful in analyzing defects that contribute to failures in polymer- and ceramic-matrix composite materials, which are difficult to characterize by conventional scanning ultrasonic techniques and other conventional nondestructive testing techniques. Selected aspects of the acousto-ultrasonic method have been described in several NASA Tech Briefs articles in recent years. Summarizing briefly: The acousto-ultrasonic method involves the use of an apparatus like the one depicted in the figure (or an apparatus of similar functionality). Pulses are excited at one location on a surface of a plate specimen by use of a broadband transmitting ultrasonic transducer. The stress waves associated with these pulses propagate along the specimen to a receiving transducer at a different location on the same surface. Along the way, the stress waves interact with the microstructure and flaws present between the transducers. The received signal is analyzed to evaluate the microstructure and flaws. The specific variant of the acousto-ultrasonic method implemented in the present developmental system goes beyond the basic principle described above to include the following major additional features: Computer-controlled motorized translation stages are used to automatically position the transducers at specified locations. Scanning is performed in the sense that the measurement, data-acquisition, and data-analysis processes are repeated at different specified transducer locations in an array that spans the specimen surface (or a specified portion of the surface). A pneumatic actuator with a load cell is used to apply a controlled contact force. 
In analyzing the measurement data for each pair of transducer locations in the scan, the total
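
    The scan procedure described in this record (position the transducers, measure, repeat over a grid spanning the specimen surface) can be sketched as a simple raster loop. The `move_to` and `acquire` callbacks below are hypothetical stand-ins for the motorized-stage and data-acquisition interfaces, not the real instrument API.

```python
import numpy as np

def run_au_scan(x_points, y_points, move_to, acquire):
    """Raster-scan sketch of the AU measurement loop described above.
    `move_to(x, y)` positions the transducer pair via the translation
    stages; `acquire()` returns one measurement (e.g., a decay parameter)."""
    results = np.zeros((len(y_points), len(x_points)))
    for i, y in enumerate(y_points):
        for j, x in enumerate(x_points):
            move_to(x, y)              # computer-controlled stage motion
            results[i, j] = acquire()  # measure at this grid location
    return results

# Demo with stub callbacks: "measure" a value that depends on position.
xs = np.linspace(0, 10, 5)
ys = np.linspace(0, 10, 5)
pos = {}
grid = run_au_scan(xs, ys,
                   lambda x, y: pos.update(x=x, y=y),
                   lambda: pos["x"] + pos["y"])
print(grid.shape)  # (5, 5)
```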

  4. Low-Cost, High-Performance Hall Thruster Support System

    NASA Technical Reports Server (NTRS)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater, and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and stable, sustained operation during severe Hall-effect-thruster current oscillations. Laboratory testing has demonstrated a discharge module efficiency of 96 percent, which is considerably higher than the current state of the art.

  5. High performance quarter-inch cartridge tape systems

    NASA Technical Reports Server (NTRS)

    Schwarz, Ted

    1993-01-01

    Within the established low cost structure of Data Cartridge drive technology, it is possible to achieve nearly 1 terabyte (10^12 bytes) of data capacity and more than 1 Gbit/sec (greater than 100 Mbytes/sec) transfer rates. The desirability of placing this capability within a single cartridge will be determined by the market. The 3.5 in. or smaller form factor may suffice to serve both the current Data Cartridge market and a high performance segment. In any case, Data Cartridge technology provides a strong, sustainable technology growth path into the 21st century.
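
    To put the quoted figures in perspective, a quick back-of-the-envelope calculation shows the byte rate and how long it would take to stream a full cartridge at the stated speed:

```python
capacity_bytes = 10**12                  # ~1 terabyte cartridge
rate_bits_per_s = 1e9                    # 1 Gbit/sec transfer rate
rate_bytes_per_s = rate_bits_per_s / 8   # bits -> bytes
fill_time_s = capacity_bytes / rate_bytes_per_s

print(rate_bytes_per_s / 1e6)        # 125.0  -> i.e. "greater than 100 Mbytes/sec"
print(round(fill_time_s / 3600, 1))  # 2.2    -> hours to stream a full cartridge
```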

  6. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data, and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system, and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  7. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    PubMed

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
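
    The "filtering of particle pairs with nearly zero mutual force" is, in software MD codes, essentially a cutoff test: pairs separated by more than the cutoff radius contribute negligible short-range force and are discarded before force evaluation. A brute-force software analogue (not the paper's FPGA partitioning and mapping scheme) might look like:

```python
import numpy as np

def filter_pairs(positions, cutoff):
    """Return index pairs closer than `cutoff` (O(N^2) sweep).
    Real MD codes and the FPGA design use cell/partition structures
    to avoid examining every pair; this is only for intuition."""
    n = len(positions)
    i, j = np.triu_indices(n, k=1)                    # all unique pairs
    d2 = np.sum((positions[i] - positions[j]) ** 2, axis=1)
    keep = d2 < cutoff ** 2                           # the filter step
    return list(zip(i[keep].tolist(), j[keep].tolist()))

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(100, 3))
pairs = filter_pairs(pts, cutoff=1.5)
# Only a small fraction of the 4950 candidate pairs survive the cutoff,
# which is why filtering before the force pipelines pays off.
print(len(pairs) < 1000)  # True
```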

  8. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  9. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    Requirements are carefully described in descriptions of systems to be acquired, but often there is no requirement to provide measurements and performance monitoring to ensure that requirements are met over the long term after acceptance. A set of measurements for various UNIX-based systems will be available at the 1992 Goddard Conference on Mass Storage Systems and Technologies. The authors invite others to contribute to the set of measurements. The framework for presenting the measurements of supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them is given. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well. The need to integrate measurements of all these components from different vendors, and of third-party software systems, has been recognized, and there are efforts to standardize a framework for doing this. The measurement activity falls into the domain of management standards. Standards work is ongoing for Open Systems Interconnection (OSI) systems management; AT&T, Digital, and Hewlett-Packard are developing management systems based on this architecture even though it is not finished. Another effort is the UNIX International Performance Management Working Group. In addition, there are the Open Software Foundation's Distributed Management Environment and the Object Management Group. A paper comparing the OSI systems management model and the Object Management Group model has been written. The IBM world has had measurement capability for various IBM systems since the 1970s, and different vendors were able to develop tools for analyzing and viewing these measurements. Since IBM was the only vendor, the user groups were able to lobby IBM for the kinds of measurements needed. In the UNIX world of multiple vendors, a common set of measurements will not be as easy to obtain.

  11. High performance control of harmonic instability from HVDC link system

    SciTech Connect

    Min, W.K.; Yoo, M.H.

    1995-12-31

    This paper investigates the usefulness of a novel control method for an HVDC link system that suffers from severe low-order harmonic conditions. The control scheme uses feedforward control to directly regulate the dc current at the dc link. The studies in this paper aim at improving the dynamic response of the HVDC link system under disturbances such as faults. To achieve those objectives, digital time-domain simulations are performed with the electromagnetic transients program for dc systems (EMTDC). The method results in stable recovery from faults at both rectifier and inverter terminal busbars for an HVDC system that is inherently unstable. It has been found to be robust, and control performance has been enhanced.

  12. High-performance multimedia encryption system based on chaos.

    PubMed

    Hasimoto-Beltrán, Rogelio

    2008-06-01

    Current chaotic encryption systems in the literature do not fulfill the security and performance demands of real-time multimedia communications. To satisfy these demands, we propose a generalized symmetric cryptosystem based on N independently iterated chaotic maps (an N-map array) periodically perturbed with a three-level perturbation scheme and a double feedback (global and local) to increase the system's robustness to attacks. The first- and second-level perturbations make the cryptosystem extremely sensitive to changes in the plaintext data, since the system's output itself (ciphertext global feedback) is used in the perturbation process. The third-level perturbation is a system reset, in which the system key and chaotic maps are replaced with totally new values. An analysis of the proposed scheme regarding its vulnerability to attacks, statistical properties, and implementation performance is presented. To the best of our knowledge, we provide a secure cryptosystem with one of the highest levels of performance for real-time multimedia communications.

  13. High-performance multimedia encryption system based on chaos

    NASA Astrophysics Data System (ADS)

    Hasimoto-Beltrán, Rogelio

    2008-06-01

    Current chaotic encryption systems in the literature do not fulfill the security and performance demands of real-time multimedia communications. To satisfy these demands, we propose a generalized symmetric cryptosystem based on N independently iterated chaotic maps (an N-map array) periodically perturbed with a three-level perturbation scheme and a double feedback (global and local) to increase the system's robustness to attacks. The first- and second-level perturbations make the cryptosystem extremely sensitive to changes in the plaintext data, since the system's output itself (ciphertext global feedback) is used in the perturbation process. The third-level perturbation is a system reset, in which the system key and chaotic maps are replaced with totally new values. An analysis of the proposed scheme regarding its vulnerability to attacks, statistical properties, and implementation performance is presented. To the best of our knowledge, we provide a secure cryptosystem with one of the highest levels of performance for real-time multimedia communications.

  14. A High Performance Virtualized Seismic Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Eakins, J. A.; Reyes, J. C.; Franke, M.; Sánchez, R. F.; Cortes Muñoz, P.; Busby, R. W.; Vernon, F.; Barrientos, S. E.

    2014-12-01

    As part of a collaborative effort with the Incorporated Research Institutions for Seismology, a virtualized seismic data acquisition and processing system was recently installed at the Centro Sismológico Nacional (CSN) at the Universidad de Chile for use as part of their early warning system. Using lessons learned from the Earthscope Transportable Array project, the design of this system consists of dedicated acquisition, processing, and data distribution nodes hosted on a high-availability hypervisor cluster. Data is exchanged with the IRIS Data Management Center and the existing processing infrastructure at the CSN. The processing nodes are backed by 20 TB of hybrid solid-state disk (SSD) and spinning-disk storage with automatic tiering of data between the disks. As part of the installation, best practices for station metadata maintenance were discussed and applied to the existing IRIS-sponsored stations, as well as to over 30 new stations being added to the early warning network. Four virtual machines (VMs) were configured with distinct tasks. Two VMs are dedicated to data acquisition, one to real-time data processing, and one acts as a relay between the data acquisition and processing systems, with services for the existing earthquake revision and dissemination infrastructure. The first acquisition system connects directly to Basalt dataloggers and Q330 digitizers, managing them and acquiring seismic data as well as state-of-health (SOH) information. As newly deployed stations become available (beyond the existing 30), this VM is configured to acquire data from them and incorporate the additional data. The second acquisition system imports the legacy network of the CSN and data streams provided by other data centers. The processing system is connected to the production and archive databases. The relay system merges all incoming data streams and obtains the processing results. Data and processing packets are available for subsequent review and dissemination by the CSN.
Such

  15. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance-discipline integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures that is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  16. Personal communication system combines high performance with miniaturization

    NASA Technical Reports Server (NTRS)

    Atlas, N. D.

    1967-01-01

    Personal communication system provides miniaturized components that incorporate high level signal characteristics plus noise rejection in both microphone and earphone circuitry. The microphone is designed to overcome such spacecraft flight problems as size, ambient noise level, and RF interference.

  17. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nishimura, Nobuya; Nomura, Tomohiro; Yabushita, Akihiro; Kashiwagi, Takao

    A computer simulation of the transient drying process has been developed in order to predict the dynamic thermal performance of a new superheated-steam drying system using an absorption-type temperature amplifier as a steam superheater. A feature of this drying system is that the exhausted superheated steam conventionally discharged from the dryer can be reused as a driving heat source for the generator in this heat pump. In the transient drying process, however, the evaporation of moisture sharply decreases, so reuse of the exhausted superheated steam as a heating source for the generator can hardly be expected. The effects of this exhausted superheated steam and of changes in the hot-water and cooling-water temperatures were therefore mainly investigated, checking whether this drying system can be driven directly by low-level energy such as solar or waste heat. Furthermore, the performance of this drying system was evaluated on a qualitative basis by using the exergy efficiency. The results show that, under transient drying conditions, the temperature boost of superheated steam is possible at a high temperature, and thus the absorption-type temperature amplifier can be an effective steam-superheater system.

  18. Toward high performance radioisotope thermophotovoltaic systems using spectral control

    NASA Astrophysics Data System (ADS)

    Wang, Xiawa; Chan, Walker; Stelmakh, Veronika; Celanovic, Ivan; Fisher, Peter

    2016-12-01

    This work describes RTPV-PhC-1, an initial prototype for a radioisotope thermophotovoltaic (RTPV) system using a two-dimensional photonic crystal emitter and a low-bandgap thermophotovoltaic (TPV) cell to realize spectral control. We validated a system simulation using measurements of RTPV-PhC-1 and of a comparison setup, RTPV-FlatTa-1, which had the same configuration except for a polished tantalum emitter. The emitter of RTPV-PhC-1, powered by an electric heater providing energy equivalent to one plutonia fuel pellet, reached 950 °C with 52 W of thermal input power and produced 208 mW of output power from a 1 cm2 TPV cell. We compared the system performance using the photonic crystal emitter to that with the polished flat tantalum emitter and found that spectral control with the photonic crystal was four times more efficient. Based on the simulation, with more cell area, better TPV cells, and an improved insulation design, the system powered by a fuel-pellet-equivalent heat source is expected to reach an efficiency of 7.8%.
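
    The demonstrated system efficiency implied by the quoted numbers is easy to check against the projected 7.8%:

```python
thermal_input_w = 52.0     # heater power, equivalent to one fuel pellet
electrical_out_w = 0.208   # 208 mW from the 1 cm^2 TPV cell
eff = electrical_out_w / thermal_input_w
print(f"{eff:.2%}")        # 0.40% demonstrated, vs. 7.8% projected
```

    The large gap between the two figures is what the proposed improvements (more cell area, better cells, better insulation) are meant to close.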

  19. American Models of High-Performance Work Systems.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    1993-01-01

    Looks at work systems that draw on quality engineering and management concepts and use incentives. Discusses how some U.S. companies improve performance and maintain high quality. Suggests that the federal government strategy should include measures to support change in production processes and promote efficient factors of production. (JOW)

  20. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  1. Low cost, high performance, self-aligning miniature optical systems

    PubMed Central

    Kester, Robert T.; Christenson, Todd; Kortum, Rebecca Richards; Tkaczyk, Tomasz S.

    2009-01-01

    The most expensive aspects in producing high quality miniature optical systems are the component costs and long assembly process. A new approach for fabricating these systems that reduces both aspects through the implementation of self-aligning LIGA (German acronym for lithographie, galvanoformung, abformung, or x-ray lithography, electroplating, and molding) optomechanics with high volume plastic injection molded and off-the-shelf glass optics is presented. This zero alignment strategy has been incorporated into a miniature high numerical aperture (NA = 1.0W) microscope objective for a fiber confocal reflectance microscope. Tight alignment tolerances of less than 10 μm are maintained for all components that reside inside of a small 9 gauge diameter hypodermic tubing. A prototype system has been tested using the slanted edge modulation transfer function technique and demonstrated to have a Strehl ratio of 0.71. This universal technology is now being developed for smaller, needle-sized imaging systems and other portable point-of-care diagnostic instruments. PMID:19543344

  2. Architecture for a high-performance tele-ultrasound system

    NASA Astrophysics Data System (ADS)

    Chimiak, William J.; Rainer, Robert O.; Wolfman, Neil T.; Covitz, Wesley

    1996-05-01

    Clinical prototypes of digital tele-ultrasound systems at the Bowman Gray School of Medicine have provided insight into various design architectures. Until network equipment costs decrease, hybrid systems often provide good cost/feature mixes by using high-cost networking equipment only when digital networking is required. Within a hospital using the remote ultrasound system, a video and audio router interconnects the video outputs of the ultrasound modalities and the technologist communications subsystems. This is done either manually or by remote signaling, depending on the size of the ultrasound infrastructure and the cost of a remote signaling subsystem. For extramural sites and for in-hospital areas too distant for cost-effective analog switching techniques, an appropriate coder/decoder (CODEC), with echo cancellation, is used to transfer the audio and visual information to a CODEC at the viewing station location. The CODECs can be T1 (1.544 Mbps) CODECs for areas that cannot be reached economically at asynchronous transfer mode (ATM) data rates, contingent upon the diagnostic quality of the output of the T1 CODECs. Otherwise, high-speed CODECs are used with 45 Mbps DS-3 or ATM transmission facilities. This system allows full use of existing hospital infrastructures while adapting to the emerging data communications infrastructures being implemented.

  4. Nanostructured microfluidic digestion system for rapid high-performance proteolysis.

    PubMed

    Cheng, Gong; Hao, Si-Jie; Yu, Xu; Zheng, Si-Yang

    2015-02-07

    A novel microfluidic protein digestion system with a nanostructured and bioactive inner surface was constructed by an easy biomimetic self-assembly strategy for rapid and effective proteolysis in 2 minutes, which is faster than the conventional overnight digestion methods. It is expected that this work would contribute to rapid online digestion in future high-throughput proteomics.

  5. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  6. A High Performance Content Based Recommender System Using Hypernym Expansion

    SciTech Connect

    Potok, Thomas E; Patton, Robert M

    2015-10-20

    There are two major limitations in content-based recommender systems: the first is accurately measuring the similarity of preferred documents to a large set of general documents, and the second is over-specialization, which limits the "interesting" documents recommended from a general document set. To address these issues, we propose combining linguistic methods and term-frequency methods to improve overall performance and recommendation quality.
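
    Hypernym expansion can be illustrated with a toy example: documents that share no literal terms can still match at a common ancestor concept, which counters over-specialization. The hypernym table and Jaccard scoring below are illustrative stand-ins, not the ORNL system described in this record.

```python
# Toy hypernym table (child -> parent); real systems would use a
# lexical resource such as WordNet instead of a hand-written dict.
HYPERNYMS = {"beagle": "dog", "dog": "animal", "tabby": "cat", "cat": "animal"}

def expand(tokens):
    """Add each token's hypernym chain, so 'beagle' and 'tabby'
    can both match at the shared ancestor 'animal'."""
    out = set(tokens)
    for tok in tokens:
        while tok in HYPERNYMS:
            tok = HYPERNYMS[tok]
            out.add(tok)
    return out

def jaccard(a, b):
    """Set-overlap similarity; a stand-in for a TF-based score."""
    return len(a & b) / len(a | b)

doc_a, doc_b = {"beagle", "barks"}, {"tabby", "purrs"}
print(jaccard(doc_a, doc_b))                      # 0.0 with raw terms
print(jaccard(expand(doc_a), expand(doc_b)) > 0)  # True after expansion
```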

  7. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S.; Chung, C.; Fitsos, P.; Gronberg, J.; Ross, M.; Khainovski, O.; Kolomensky, Y.; Loscutoff, P.; Slater, M.; Thomson, M.; Ward, D.; Boogert, S.; Vogel, V.; Meller, R.; Lyapin, A.; Malton, S.; Miller, D.; Frisch, J.; Hinton, S.; May, J.; McCormick, D.; /SLAC /Caltech /KEK, Tsukuba

    2007-07-06

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved--ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  8. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S; Chung, C; Fitsos, P; Gronberg, J; Ross, M; Khainovski, O; Kolomensky, Y; Loscutoff, P; Slater, M; Thomson, M; Ward, D; Boogert, S; Vogel, V; Meller, R; Lyapin, A; Malton, S; Miller, D; Frisch, J; Hinton, S; May, J; McCormick, D; Smith, S; Smith, T; White, G; Orimoto, T; Hayano, H; Honda, Y; Terunuma, N; Urakawa, J

    2005-09-12

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved - ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.
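
    A common way to measure the resolution quoted above is the triplet residual method: predict the middle BPM's reading from the two outer BPMs by straight-line interpolation and take the RMS of the residuals. A minimal simulation sketch (the noise level, BPM spacings, and pulse count are assumed for illustration):

```python
import math
import random

def triplet_residuals(x1, x2, x3, s1, s2, s3):
    """Residual at the middle BPM: measured reading minus the straight-line
    prediction from the two outer BPMs at longitudinal positions s1 < s2 < s3."""
    res = []
    for a, b, c in zip(x1, x2, x3):
        predicted = a + (c - a) * (s2 - s1) / (s3 - s1)
        res.append(b - predicted)
    return res

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

# Simulate 1000 pulses of a beam on a straight trajectory with 20 nm
# single-BPM noise; positions are in metres along an assumed beamline.
random.seed(0)
noise = 20e-9
s1, s2, s3 = 0.0, 0.5, 1.0
true = [random.uniform(-20e-6, 20e-6) for _ in range(1000)]  # +/- 20 um offsets
x1 = [t + random.gauss(0, noise) for t in true]
x2 = [t + random.gauss(0, noise) for t in true]
x3 = [t + random.gauss(0, noise) for t in true]
# For a midway middle BPM the residual RMS is noise * sqrt(1.5).
residual_rms = rms(triplet_residuals(x1, x2, x3, s1, s2, s3))
```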

  9. Fitting modular reconnaissance systems into modern high-performance aircraft

    NASA Astrophysics Data System (ADS)

    Stroot, Jacquelyn R.; Pingel, Leslie L.

    1990-11-01

    The installation of the Advanced Tactical Air Reconnaissance System (ATARS) in the F/A-18D(RC) presented a complex set of design challenges. At the time of the F/A-18D(RC) ATARS option exercise, the design and development of the ATARS subsystems and the parameters of the F/A-18D(RC) were essentially fixed. ATARS is to be installed in the gun bay of the F/A-18D(RC), taking up no additional room, nor adding any more weight than what was removed. The F/A-18D(RC) installation solution required innovations in mounting, cooling, and fit techniques, which made constant trade study essential. The successful installation in the F/A-18D(RC) is the result of coupling fundamental design engineering with brainstorming and nonstandard approaches to every situation. ATARS is sponsored by the Aeronautical Systems Division, Wright-Patterson AFB, Ohio. The F/A-18D(RC) installation is being funded to the Air Force by the Naval Air Systems Command, Washington, D.C.

  10. High-performance space shuttle auxiliary propellant valve system

    NASA Technical Reports Server (NTRS)

    Smith, G. M.

    1973-01-01

    Several potential valve closures for the space shuttle auxiliary propulsion system (SS/APS) were investigated analytically and experimentally in a modeling program. The most promising of these were analyzed and experimentally evaluated in a full-size functional valve test fixture of novel design. The engineering investigations conducted for both model and scale evaluations of the SS/APS valve closures and functional valve fixture are described. Preliminary designs, laboratory tests, and overall valve test fixture designs are presented, and a final recommended flightweight SS/APS valve design is presented.

  11. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.

    1984-01-01

    An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element or PE, has been completed. Eight BP's, together with their supporting random-access memory, a 64 k x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed or floating point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BP's represent a likely initial chip level. The Stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.
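
    The 64 k x 9 addition ROM can be pictured as a simple lookup table: a 16-bit address concatenating two 8-bit operands selects a 9-bit word holding the sum with its carry. A toy sketch of the idea, not the actual stage logic:

```python
def build_adder_rom():
    """A 64K x 9 lookup ROM for 8-bit addition: the 16-bit address is the
    concatenation of two 8-bit operands, and each 9-bit word holds the
    sum including carry-out (values 0..510 fit in 9 bits)."""
    return [a + b for a in range(256) for b in range(256)]

rom = build_adder_rom()

def rom_add(a, b):
    """Add two 8-bit operands by a single ROM lookup instead of logic."""
    return rom[(a << 8) | b]
```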

  12. A high performance pneumatic braking system for heavy vehicles

    NASA Astrophysics Data System (ADS)

    Miller, Jonathan I.; Cebon, David

    2010-12-01

    Current research into reducing actuator delays in pneumatic brake systems is opening the door for advanced anti-lock braking algorithms to be used on heavy goods vehicles. However, these algorithms require the knowledge of variables that are impractical to measure directly. This paper introduces a sliding mode braking force observer to support a sliding mode controller for air-braked heavy vehicles. The performance of the observer is examined through simulations and field testing of an articulated heavy vehicle. The observer operated robustly during single-wheel vehicle simulations, and provided reasonable estimates of surface friction from test data. The effect of brake gain errors on the controller and observer are illustrated, and a recursive least squares estimator is derived for the brake gain. The estimator converged within 0.3 s in simulations and vehicle trials.
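
    A recursive least-squares estimator of the kind mentioned can be sketched for a scalar brake gain g in a hypothetical torque model y = g * u (the signal names, initial covariance, and forgetting factor below are illustrative, not the paper's values):

```python
class ScalarRLS:
    """Recursive least-squares estimate of a scalar gain g in y = g * u,
    with forgetting factor lam (hypothetical signal names)."""
    def __init__(self, g0=1.0, p0=1000.0, lam=0.99):
        self.g = g0    # gain estimate
        self.P = p0    # estimate covariance
        self.lam = lam

    def update(self, u, y):
        k = self.P * u / (self.lam + u * self.P * u)  # RLS gain
        self.g += k * (y - self.g * u)                # correct by residual
        self.P = (self.P - k * u * self.P) / self.lam
        return self.g

# Converge on an assumed true brake gain of 2.5 from noiseless
# pressure/torque pairs.
rls = ScalarRLS()
for step in range(50):
    pressure = 1.0 + 0.1 * step
    torque = 2.5 * pressure
    rls.update(pressure, torque)
```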

  13. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, Agrawel

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
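
    The link between access patterns and memory traffic can be illustrated with a toy direct-mapped cache model (the cache geometry and matrix traversal are assumptions for illustration, not taken from the thesis): a row-major sweep of a 64 x 64 array reuses each cache line, while a column-major sweep with this geometry conflicts on every access:

```python
def cache_misses(addresses, num_lines=64, line_size=8):
    """Count misses of a direct-mapped cache with num_lines lines of
    line_size words each (word-addressed toy model)."""
    lines = [None] * num_lines  # tag stored per line
    misses = 0
    for addr in addresses:
        tag, index = divmod(addr // line_size, num_lines)
        if lines[index] != tag:
            lines[index] = tag
            misses += 1
    return misses

N = 64
# Row-major touches consecutive words: one miss per 8-word line.
row_major = [i * N + j for i in range(N) for j in range(N)]
# Column-major strides by N: with this geometry every access conflicts.
col_major = [i * N + j for j in range(N) for i in range(N)]
```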

  14. Dynamic Thermal Management for High-Performance Storage Systems

    SciTech Connect

    Kim, Youngjae; Gurumurthi, Dr Sudhanva; Sivasubramaniam, Anand

    2012-01-01

    Thermal-aware design of disk drives is important because high temperatures can cause reliability problems. Dynamic Thermal Management (DTM) techniques have been proposed to operate the disk at the average-case temperature, rather than at the worst case, by modulating activities to avoid thermal emergencies. Thermal emergencies can be caused by unexpected events, such as fan failures, increased inlet air temperature, etc. One DTM technique is a delay-based approach that throttles disk seek activity to cool the drive down. Even though such a DTM approach can overcome thermal emergencies without stopping disk activity, it suffers from long delays when servicing requests. Thus, in this chapter, we investigate the possibility of using a multispeed disk drive (dynamic rotations per minute, or DRPM) that dynamically modulates the rotational speed of the platter to implement DTM. Using a detailed performance and thermal simulator of a storage system, we evaluate two possible DTM policies (time-based and watermark-based) with a DRPM disk drive and observe that dynamic RPM modulation is effective in avoiding thermal emergencies. However, we find that the time taken to transition between different rotational speeds of the disk is critical to the effectiveness of DRPM-based DTM techniques.
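
    A watermark-based DTM policy of the kind evaluated can be sketched as a controller that steps a DRPM disk down one speed level when temperature crosses a high watermark and back up when it falls below a low watermark (the thresholds and RPM levels here are illustrative, not the chapter's values):

```python
def watermark_dtm(temps, rpm_levels=(6000, 9000, 12000), t_high=55.0, t_low=45.0):
    """Watermark-based DTM sketch for a multispeed (DRPM) disk: step down a
    speed level when temperature exceeds t_high, step back up below t_low.
    Temperatures in Celsius; thresholds and RPM levels are assumed values."""
    level = len(rpm_levels) - 1  # start at full speed
    schedule = []
    for t in temps:
        if t > t_high and level > 0:
            level -= 1  # cool down: slower rotation, less heat dissipated
        elif t < t_low and level < len(rpm_levels) - 1:
            level += 1  # thermal headroom: restore performance
        schedule.append(rpm_levels[level])
    return schedule

trace = watermark_dtm([50.0, 56.0, 57.0, 44.0, 43.0, 50.0])
```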

  15. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  16. NFS as a user interface to a high-performance data system

    SciTech Connect

    Mercier, C.W.

    1991-01-01

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  17. Research into the interaction between high performance and cognitive skills in an intelligent tutoring system

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.

    1991-01-01

    Two intelligent tutoring systems were developed. These tutoring systems are being used to study the effectiveness of intelligent tutoring systems in training high performance tasks and the interrelationship of high performance and cognitive tasks. The two tutoring systems, referred to as the Console Operations Tutors, were built using the same basic approach to the design of an intelligent tutoring system. This design approach allowed researchers to more rapidly implement the cognitively based tutor, the OMS Leak Detect Tutor, by using the foundation of code generated in the development of the high performance based tutor, the Manual Select Keyboard (MSK). It is believed that the approach can be further generalized to develop a generic intelligent tutoring system implementation tool.

  18. GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.

    PubMed

    Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim

    2016-08-01

    In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous research, while our algorithm shows 2.5 times faster execution time than a CPU-only detection algorithm.
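
    The beat classification step can be sketched as a plain k-NN majority vote over per-beat features (the features, labels, and data below are toy values, and the paper's CUDA parallelization is not shown):

```python
from collections import Counter

def knn_classify(train_feats, train_labels, sample, k=3):
    """Classify one heartbeat feature vector by majority vote of its
    k nearest training beats (squared Euclidean distance, plain-Python sketch)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, sample)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy per-beat features, e.g. (RR interval in s, QRS width in s); the labels
# "N" (normal) vs "V" (ventricular) are illustrative only.
train = [(0.8, 0.08), (0.82, 0.09), (0.79, 0.08), (0.5, 0.14), (0.48, 0.15)]
labels = ["N", "N", "N", "V", "V"]
prediction = knn_classify(train, labels, (0.81, 0.08))
```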

  19. PISA and High-Performing Education Systems: Explaining Singapore's Education Success

    ERIC Educational Resources Information Center

    Deng, Zongyi; Gopinathan, S.

    2016-01-01

    Singapore's remarkable performance in Programme for International Student Assessment (PISA) has placed it among the world's high-performing education systems (HPES). In the literature on HPES, its "secret formula" for education success is explained in terms of teacher quality, school leadership, system characteristics and educational…

  1. Lithium triborate laser vaporization of the prostate using the 120 W, high performance system laser: high performance all the way?

    PubMed

    Hermanns, Thomas; Strebel, Daniel D; Hefermehl, Lukas J; Gross, Oliver; Mortezavi, Ashkan; Müller, Alexander; Eberli, Daniel; Müntener, Michael; Michel, Maurice S; Meier, Alexander H; Sulser, Tullio; Seifert, Hans-Helge

    2011-06-01

    Technical modifications of the 120 W lithium-triborate laser have been implemented to increase power output and prevent laser fiber degradation and loss of power output during laser vaporization of the prostate. However, visible alterations at the fiber tip and the subjective impression of decreasing ablative effectiveness during lithium-triborate laser vaporization indicate that delivering constantly high laser power remains a relevant problem. Thus, we evaluated the extent of laser fiber degradation and loss of power output during 120 W lithium-triborate laser vaporization of the prostate. We investigated 46 laser fibers during routine 120 W lithium-triborate laser vaporization in 35 patients with prostatic bladder outflow obstruction. Laser beam power was measured at baseline and after the application of each 25 kJ during laser vaporization. Fiber tips were microscopically examined after the procedure. Mild to moderate degradation at the emission window occurred in all fibers, associated with a loss of power output. A steep decrease to a median power output of 57.3% of baseline was detected after applying the first 25 kJ. Median power output at the end of the defined 275 kJ lifespan of the fibers was 48.8%. Despite technical refinements of the 120 W lithium-triborate laser, fiber degradation and significantly decreased power output are still detectable during the procedure. Laser fibers are not fully appropriate for the high power delivery of the new system. There is still potential for further improvement in the laser performance. Copyright © 2011 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  2. The Design and Construction of a Battery Electric Vehicle Propulsion System - High Performance Electric Kart Application

    NASA Astrophysics Data System (ADS)

    Burridge, Mark; Alahakoon, Sanath

    2017-07-01

    This paper presents an electric propulsion system designed specifically to meet the performance specification for a competition racing kart application. The paper presents the procedure for the engineering design, construction and testing of the electric powertrain of the vehicle. High-performance electric go-karts are not an established technology within Australia. It is expected that this work will provide design guidelines for a high performance electric propulsion system with the capability of forming the basis of a competitive electric kart racing formula for Australian conditions.

  3. Advanced Concurrent Interfaces for High-Performance Multi-Media Distributed C3 Systems

    DTIC Science & Technology

    1993-03-01

    RL-TR-93-17, Final Technical Report, AD-A267 051, March 1993. Advanced Concurrent Interfaces for High-Performance Multi-Media Distributed C3 Systems. MIT Media Lab. Sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. 8474. Approved for public release. Nicholas P. Negroponte; Dr. Richard A...

  4. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

    The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators, and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. This paper describes the design, implementation, and deployment of an intelligent tutoring system developed to study the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  5. High-Performance Work Systems and School Effectiveness: The Case of Malaysian Secondary Schools

    ERIC Educational Resources Information Center

    Maroufkhani, Parisa; Nourani, Mohammad; Bin Boerhannoeddin, Ali

    2015-01-01

    This study focuses on the impact of high-performance work systems on the outcomes of organizational effectiveness with the mediating roles of job satisfaction and organizational commitment. In light of the importance of human resource activities in achieving organizational effectiveness, we argue that higher employees' decision-making capabilities…

  6. High Performance Work Systems and Organizational Outcomes: The Mediating Role of Information Quality.

    ERIC Educational Resources Information Center

    Preuss, Gil A.

    2003-01-01

    A study of the effect of high-performance work systems on 935 nurses and 182 nurses aides indicated that quality of decision-making information depends on workers' interpretive skills and partially mediated effects of work design and total quality management on organizational performance. Providing relevant knowledge and opportunities to use…

  7. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    ERIC Educational Resources Information Center

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  11. Unlocking the Black Box: Exploring the Link between High-Performance Work Systems and Performance

    ERIC Educational Resources Information Center

    Messersmith, Jake G.; Patel, Pankaj C.; Lepak, David P.

    2011-01-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level…

  13. Parallel and Grid-Based Data Mining - Algorithms, Models and Systems for High-Performance KDD

    NASA Astrophysics Data System (ADS)

    Congiusta, Antonio; Talia, Domenico; Trunfio, Paolo

    Data Mining is often a compute-intensive and time-consuming process. For this reason, several Data Mining systems have been implemented on parallel computing platforms to achieve high performance in the analysis of large data sets. Moreover, when large data repositories are coupled with geographical distribution of data, users and systems, more sophisticated technologies are needed to implement high-performance distributed KDD systems. Since computational Grids emerged as privileged platforms for distributed computing, a growing number of Grid-based KDD systems have been proposed. In this chapter we first discuss different ways to exploit parallelism in the main Data Mining techniques and algorithms, then we discuss Grid-based KDD systems. Finally, we introduce the Knowledge Grid, an environment which makes use of standard Grid middleware to support the development of parallel and distributed knowledge discovery applications.
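
    One of the simplest ways to exploit data parallelism in mining, partitioned counting with a merge step, can be sketched as follows (the thread pool and toy transactions are illustrative; real systems distribute the partitions across nodes):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def partial_counts(chunk):
    """Count item occurrences in one data partition (the local step)."""
    c = Counter()
    for transaction in chunk:
        c.update(transaction)
    return c

def parallel_item_counts(transactions, workers=4):
    """Data-parallel frequency counting: split the transactions among
    workers, count locally, then merge the partial counters (a minimal
    sketch of partitioned counting as used in parallel association mining)."""
    size = max(1, len(transactions) // workers)
    chunks = [transactions[i:i + size] for i in range(0, len(transactions), size)]
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for c in pool.map(partial_counts, chunks):
            total.update(c)
    return total

counts = parallel_item_counts([["a", "b"], ["b", "c"], ["a", "b", "c"], ["b"]])
```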

  14. Modular, flexible, and expandable high-performance image archiving and retrieving open-architecture system

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.

    1992-07-01

    In today's economy, it takes significant funds to establish a high-performance image archival and retrieval system for any image application. One cost-effective approach is to build the system in multiple phases, but there is concern that technology is advancing rapidly and the original system may not be able to take advantage of new features. The concept of an open-architecture modular, flexible and expandable system is an essential element to achieving a high-performance image archival and retrieval system within a realistically short period of time. This paper introduces a proposal for a modular, flexible, and expandable image archival and retrieval open-architecture system to stimulate discussion and thinking. It will cover the following areas: (1) data archival and retrieval requirements such as storage capacity and data management, (2) data communication and distribution requirements using local area networks and/or wide area networks, (3) architectural requirements such as adopting industry standards for hardware and software, and (4) an example of such an open-architecture system to demonstrate the feasibility of implementing a modular, flexible, and expandable high-performance image archival and retrieval system.

  15. Evolution of a high-performance storage system based on magnetic tape instrumentation recorders

    NASA Technical Reports Server (NTRS)

    Peters, Bruce

    1993-01-01

    In order to provide transparent access to data in network computing environments, high performance storage systems are getting smarter as well as faster. Magnetic tape instrumentation recorders contain an increasing amount of intelligence in the form of software and firmware that manages the processes of capturing input signals and data, putting them on media and then reproducing or playing them back. Such intelligence makes them better recorders, ideally suited for applications requiring the high-speed capture and playback of large streams of signals or data. In order to make recorders better storage systems, intelligence is also being added to provide appropriate computer and network interfaces along with services that enable them to interoperate with host computers or network client and server entities. Thus, recorders are evolving into high-performance storage systems that become an integral part of a shared information system. Datatape has embarked on a program with the Caltech-sponsored Concurrent Supercomputer Consortium to develop a smart mass storage system. Working within the framework of the emerging IEEE Mass Storage System Reference Model, a high-performance storage system that works with the STX File Server to provide storage services for the Intel Touchstone Delta Supercomputer is being built. Our objective is to provide the required high storage capacity and transfer rate to support grand challenge applications, such as global climate modeling.

  16. High Performance Variable Speed Drive System and Generating System with Doubly Fed Machines

    NASA Astrophysics Data System (ADS)

    Tang, Yifan

    Doubly fed machines are another alternative for variable speed drive systems. The doubly fed machines, including the doubly fed induction machine, the self-cascaded induction machine and the doubly excited brushless reluctance machine, have several attractive advantages for variable speed drive applications, the most important one being the significant cost reduction that comes with a reduced power converter rating. With a better understanding, improved machine design, flexible power converters and innovative controllers, the doubly fed machines could favorably compete for many applications, which may also include variable-speed power generation. The goal of this research is to enhance the attractiveness of the doubly fed machines for both variable speed drive and variable speed generator applications. Recognizing that wind power is one of the favorable clean, renewable energy sources that can contribute to the solution of the energy and environment dilemma, a novel variable-speed constant-frequency wind power generating system is proposed. By variable speed operation, the energy capturing capability of the wind turbine is improved. The improvement can be further enhanced by effectively utilizing the doubly excited brushless reluctance machine in a slip power recovery configuration. For the doubly fed machines, a stator-flux two-axis dynamic model is established, based on which a flexible active and reactive power control strategy can be developed. High performance operation of the drive and generating systems is obtained through advanced control methods, including stator field orientation control, fuzzy logic control and adaptive fuzzy control. System studies are pursued through unified modeling, computer simulation, stability analysis and power flow analysis of the complete drive system or generating system with the machine, the converter and the control. Laboratory implementations and test results with a digital signal processor system are also presented.
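
    The energy-capture argument for variable speed operation can be illustrated with a toy power-coefficient curve Cp(lambda): a fixed-speed rotor tuned for one wind speed runs off its optimal tip-speed ratio at another, while a variable-speed rotor tracks the optimum. All coefficients below are assumed for illustration, not taken from this work:

```python
import math

def turbine_power(v_wind, omega, radius=40.0, rho=1.225, cp_max=0.45, lam_opt=8.0):
    """Captured wind power with a simple parabolic Cp(lambda) model
    (illustrative coefficients): lambda is the tip-speed ratio, and Cp
    peaks at lam_opt, so variable-speed operation that holds lambda
    at lam_opt maximizes capture as the wind changes."""
    lam = omega * radius / v_wind
    cp = max(0.0, cp_max * (1 - ((lam - lam_opt) / lam_opt) ** 2))
    area = math.pi * radius ** 2
    return 0.5 * rho * area * cp * v_wind ** 3

# A rotor speed tuned for 10 m/s loses capture at 6 m/s; tracking the
# optimal tip-speed ratio (omega = lam_opt * v / radius) does not.
fixed = turbine_power(6.0, omega=2.0)                   # lambda = 13.3, off-optimum
variable = turbine_power(6.0, omega=6.0 * 8.0 / 40.0)   # lambda = 8, optimal
```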

  17. Unlocking the black box: exploring the link between high-performance work systems and performance.

    PubMed

    Messersmith, Jake G; Patel, Pankaj C; Lepak, David P; Gould-Williams, Julian

    2011-11-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance. (c) 2011 APA, all rights reserved.

  18. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch; high sensitivity, needing less than 100 electrons; high dynamic range exceeding 190 dB; high frame rates greater than 1000 frames per second (FPS) at full resolution; and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g. focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve a goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high performance imaging system and their forecast cost structure is presented.

  19. The parallel I/O architecture of the high performance storage system (HPSS). Revision 1

    SciTech Connect

    Watson, R.W.; Coyne, R.A.

    1995-04-01

    Datasets up to terabyte size and petabyte capacities have created a serious imbalance between I/O and storage system performance and system functionality. One promising approach is the use of parallel data transfer techniques for client access to storage, peripheral-to-peripheral transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, Parallel Transport Protocol (PTP), parallel FTP, and parallel client Application Programming Interface (API) used by the High Performance Storage System (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
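
    The striping idea behind such parallel transfers can be sketched in miniature: split a byte stream round-robin across several movers and interleave it back in order on the receiving side (a toy model only, not the PTP wire format):

```python
def stripe(data, width):
    """Split a byte string into round-robin stripes across `width` movers,
    a minimal sketch of the striped (parallel) transfer idea."""
    return [data[i::width] for i in range(width)]

def reassemble(stripes):
    """Interleave the stripes back into the original byte order."""
    out = bytearray()
    n = max(len(s) for s in stripes)
    for i in range(n):
        for s in stripes:
            if i < len(s):
                out += s[i:i + 1]
    return bytes(out)

parts = stripe(b"HIGH PERFORMANCE STORAGE", 4)
```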

  20. Super computers in astrophysics and High Performance simulations of self-gravitating systems

    NASA Astrophysics Data System (ADS)

    Capuzzo-Dolcetta, R.; Di Matteo, P.; Miocchi, P.

    The modern study of the dynamics of stellar systems requires the use of high-performance computers. Indeed, accurate modeling of the structure and evolution of self-gravitating systems like planetary systems, open clusters, globular clusters and galaxies implies the evaluation of body-body interactions over the whole size of the structure, a task that is computationally very expensive, in particular when performed over long intervals of time. In this report we give a concise overview of the main problems of stellar system simulations and present some exciting results we obtained on the interaction of globular clusters with the parent galaxy.
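
    The body-body interaction cost mentioned above is the classic O(N²) direct-summation force evaluation. A minimal sketch (illustrative, with a softening length to tame close encounters; not the authors' code):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Direct-summation gravitational accelerations, O(N^2) per call.

    pos  : (N, 3) positions; mass : (N,) masses;
    eps  : softening length that tames close encounters.
    """
    diff = pos[None, :, :] - pos[:, None, :]         # r_j - r_i for every pair
    dist2 = (diff ** 2).sum(axis=-1) + eps ** 2      # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                    # drop self-interaction
    return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

# Two unit masses one length unit apart: equal and opposite accelerations.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = accelerations(pos, mass)
```

    A real simulation would wrap this in a time integrator (e.g. leapfrog) and, at large N, replace the direct sum with a tree or fast-multipole scheme to cut the cost below O(N²).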

  1. A tutorial on the construction of high-performance resolution/paramodulation systems

    SciTech Connect

    Butler, R.; Overbeek, R.

    1990-09-01

    Over the past 25 years, researchers have written numerous deduction systems based on resolution and paramodulation. Of these systems, a very few have been capable of generating and maintaining a "formula database" containing more than just a few thousand clauses. These few systems were used to explore mechanisms for rapidly extracting limited subsets of "relevant" clauses. We have written this tutorial to reflect some of the best ideas that have emerged and to cast them in a form that makes them easily accessible to students wishing to write their own high-performance systems. 4 refs.

  2. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls among the elderly has been increasing as society ages. The main factor is a deteriorating sense of balance due to declining physical performance. Another major factor is that the elderly tend to walk bowlegged, and the body's center of gravity tends to swing from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system to treat the gap in the center of balance. We also designed High-Performance Shoes that show the status of a person's balance while walking, and provided walking assistance from the insole, whose stiffness, matched to the pressure distribution of the human sole, can be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole distribution patterns with corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.

  3. A High Performance Frequency Standard and Distribution System for Cassini Ka-Band Experiment

    DTIC Science & Technology

    2005-08-01

    A High Performance Frequency Standard and Distribution System for Cassini Ka-Band Experiment. R. T. WANG, M. D. CALHOUN, A. KIRK, W. A. DIENER. ...spacecraft in a series of occultation measurements performed over a 78 day period from March to June 2005. I. INTRODUCTION The Cassini-Huygens project... successful Huygens landing on the moon Titan, the Cassini spacecraft has begun a 3 year mission of continued moon flybys and observations. During this time...

  4. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites acquire a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics over the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner, and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The high performance system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms by using high performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks to a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with the traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCP) from various resources and the application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods and application
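
    At the simplest level, the two-tier parallelism described above (GPU/multi-core within a node, task distribution across nodes) amounts to fanning independent scenes out to a pool of workers. The function names below are hypothetical, not CRESDA's or PCI Geomatics' API:

```python
from concurrent.futures import ThreadPoolExecutor

def orthorectify(scene_id):
    # Placeholder for the per-scene work described in the abstract
    # (automatic GCP collection, photogrammetric model refinement,
    # GPU-accelerated resampling). Here it just tags the scene as done.
    return (scene_id, "orthorectified")

def process_daily_batch(scene_ids, max_workers=8):
    # Independent scenes are an embarrassingly parallel workload, so they
    # can simply be fanned out across workers; at larger scale the same
    # pattern distributes tasks across cluster nodes.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(orthorectify, scene_ids))

results = process_daily_batch(range(10))
```

    `Executor.map` preserves input order, so the results line up with the submitted scene list even though execution order is arbitrary.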

  5. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-02-01

    A high-performance, cost-effective laser protection system is of crucial importance, as the rapid advance of lasers in military and civilian fields can lead to severe damage to human eyes and sensitive optical devices. Progress, however, has been hindered by the angle dependence of the protective effect and by complex preparation processes. Here we demonstrate that angle independence, good processability, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers.

  6. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals

    PubMed Central

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-01-01

    A high-performance, cost-effective laser protection system is of crucial importance, as the rapid advance of lasers in military and civilian fields can lead to severe damage to human eyes and sensitive optical devices. Progress, however, has been hindered by the angle dependence of the protective effect and by complex preparation processes. Here we demonstrate that angle independence, good processability, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers. PMID:28230153

  7. Unconventional High-Performance Laser Protection System Based on Dichroic Dye-Doped Cholesteric Liquid Crystals.

    PubMed

    Zhang, Wanshu; Zhang, Lanying; Liang, Xiao; Le Zhou; Xiao, Jiumei; Yu, Li; Li, Fasheng; Cao, Hui; Li, Kexuan; Yang, Zhou; Yang, Huai

    2017-02-23

    A high-performance, cost-effective laser protection system is of crucial importance, as the rapid advance of lasers in military and civilian fields can lead to severe damage to human eyes and sensitive optical devices. Progress, however, has been hindered by the angle dependence of the protective effect and by complex preparation processes. Here we demonstrate that angle independence, good processability, wavelength tunability, high optical density and good visibility can be effectuated simultaneously by embedding dichroic anthraquinone dyes in a cholesteric liquid crystal matrix. More significantly, unconventional two-dimensional parabolic protection behavior is reported for the first time: in stark contrast to existing protection systems, the overall parabolic protection behavior enables the protective effect to increase with incident angle, hence providing omnibearing high-performance protection. The protective effect is controllable by dye concentration, LC cell thickness and CLC reflection efficiency, and the system can be made flexible, enabling applications in flexible and even wearable protection devices. This research creates a promising avenue for high-performance and cost-effective laser protection, and may foster the development of optical applications such as solar concentrators, car explosion-proof membranes, smart windows and polarizers.

  8. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

    SciTech Connect

    Tierney, Brian; Johnston, William; Crowley, Brian; Hoo, Gary; Brooks, Chris; Gunter, Dan

    1999-12-23

    The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
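
    The precision event logs at the heart of the methodology can be approximated as structured records with high-resolution timestamps; the JSON-lines form below is illustrative and not necessarily NetLogger's actual log format:

```python
import io
import json
import time

def log_event(stream, event, **fields):
    # One precision-timestamped event record per line. Field names here
    # are illustrative placeholders, not the real NetLogger schema.
    record = {"ts": time.time_ns(), "event": event, **fields}
    stream.write(json.dumps(record) + "\n")

log = io.StringIO()
log_event(log, "transfer.start", host="server", size=1 << 20)
log_event(log, "transfer.end", host="server", size=1 << 20)

# Pairing matching start/end events yields the per-stage latency that
# end-to-end analysis and visualization tools consume.
start, end = (json.loads(line) for line in log.getvalue().splitlines())
latency_ns = end["ts"] - start["ts"]
```

    Emitting such records from every host and application layer is what makes the combined network/host/application view described above possible.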

  9. Damage-Mitigating Control of Space Propulsion Systems for High Performance and Extended Life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang

    1994-01-01

    A major goal in the control of complex mechanical systems, such as spacecraft, rocket engines, advanced aircraft, and power plants, is to achieve high performance with increased reliability, component durability, and maintainability. The current practice of decision and control systems synthesis focuses on improving performance and diagnostic capabilities under constraints that often do not adequately represent materials degradation. In view of the high performance requirements of the system and the availability of improved materials, the lack of appropriate knowledge about the properties of these materials will lead either to less than achievable performance due to overly conservative design, or to over-straining of the structure, leading to unexpected failures and drastic reduction of the service life. The key idea in this report is that a significant improvement in service life could be achieved by a small reduction in the system dynamic performance. The major task is to characterize the damage generation process, and then utilize this information in a mathematical form to synthesize a control law that would meet the system requirements and simultaneously satisfy the constraints imposed by the material and structural properties of the critical components. The concept of damage mitigation is introduced for control of mechanical systems to achieve high performance with a prolonged life span. A model of fatigue damage dynamics is formulated in the continuous-time setting, instead of a cycle-based representation, for direct application to control systems synthesis. An optimal control policy is then formulated via nonlinear programming under specified constraints on the damage rate and accumulated damage. The results of simulation experiments for the transient upthrust of a bipropellant rocket engine are presented to demonstrate the efficacy of the damage-mitigating control concept.

  10. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and are a mystery to the scientific community. Swift, a NASA mission that includes international participation, was designed and built in preparation for a 2003 launch to help to determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range between a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It then must process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) in order to meet these challenging requirements. The IPE is a small size, low power and high performing computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE. The paper concludes with the IPE system performance that was measured during end-to-end system testing.

  11. Systems and methods for advanced ultra-high-performance InP solar cells

    DOEpatents

    Wanlass, Mark

    2017-03-07

    Systems and Methods for Advanced Ultra-High-Performance InP Solar Cells are provided. In one embodiment, an InP photovoltaic device comprises: a p-n junction absorber layer comprising at least one InP layer; a front surface confinement layer; and a back surface confinement layer; wherein either the front surface confinement layer or the back surface confinement layer forms part of a High-Low (HL) doping architecture; and wherein either the front surface confinement layer or the back surface confinement layer forms part of a heterointerface system architecture.

  12. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    Considering a practical continuous variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
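
    One common way to meet the two requirements stated above, operation at low SNR and tolerance of a channel-induced phase rotation, is to slide a known synchronization frame along the received sequence and take the magnitude of the complex correlation, which is invariant to a global phase shift. This is a simplified sketch, not the authors' method:

```python
import numpy as np

def find_frame_offset(received, sync_frame):
    """Locate a known synchronization frame inside a received sequence.

    The magnitude of the complex correlation is used, so a global phase
    rotation of the channel does not affect the estimate.
    """
    n = len(sync_frame)
    scores = [abs(np.vdot(sync_frame, received[i:i + n]))
              for i in range(len(received) - n + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
sync = rng.standard_normal(64) + 1j * rng.standard_normal(64)
noise = 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
signal = noise.copy()
signal[50:114] += sync * np.exp(1j * 1.3)  # embed the frame, phase-rotated
offset = find_frame_offset(signal, sync)   # recovers the start index, 50
```

    Lengthening the synchronization frame raises the correlation peak relative to the noise floor, which is the same trade-off the authors exploit to operate at lower SNR.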

  13. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    SciTech Connect

    Wang, Teng; Oral, H Sarp; Wang, Yandong; Settlemyer, Bradley W; Atchley, Scott; Yu, Weikuan

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is imperative to temporarily buffer the bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5x on leadership computer systems.
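
    The buffer-then-flush idea can be sketched in a few lines. This toy single-node version is illustrative only; BurstMem itself is a distributed storage framework:

```python
import queue
import threading

class BurstBuffer:
    """Toy burst buffer: absorb bursty writes into memory immediately and
    drain them to slow backing storage on a background thread."""

    def __init__(self, flush_fn):
        self._q = queue.Queue()
        self._flush_fn = flush_fn          # stands in for the parallel FS write
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def write(self, chunk):
        self._q.put(chunk)                 # fast path: burst absorbed in memory

    def _drain(self):
        while True:
            chunk = self._q.get()
            if chunk is None:              # sentinel: stop draining
                break
            self._flush_fn(chunk)          # slow path: flush to backing store

    def close(self):
        self._q.put(None)
        self._worker.join()                # wait until everything is flushed

flushed = []
buf = BurstBuffer(flushed.append)
for chunk in (b"a", b"b", b"c"):
    buf.write(chunk)                       # returns immediately
buf.close()                                # drains the queue in order
```

    The application sees only the fast in-memory `write`, while the background drain smooths the burst into a sustainable stream for the long-term file system.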

  14. High performance work systems: the gap between policy and practice in health care reform.

    PubMed

    Leggat, Sandra G; Bartram, Timothy; Stanton, Pauline

    2011-01-01

    Studies of high-performing organisations have consistently reported a positive relationship between high performance work systems (HPWS) and performance outcomes. Although many of these studies have been conducted in manufacturing, similar findings of a positive correlation between aspects of HPWS and improved care delivery and patient outcomes have been reported in international health care studies. The purpose of this paper is to bring together the results from a series of studies conducted within Australian health care organisations. First, the authors seek to demonstrate the link found between high performance work systems and organisational performance, including the perceived quality of patient care. Second, the paper aims to show that the hospitals studied do not have the necessary aspects of HPWS in place and that there has been little consideration of HPWS in health system reform. The paper draws on a series of correlation studies using survey data from hospitals in Australia, supplemented by qualitative data collection and analysis. To demonstrate the link between HPWS and the perceived quality of care delivery, the authors conducted regression analysis with tests of mediation and moderation to analyse survey responses of 201 nurses in a large regional Australian health service, and explored HRM and HPWS in detail in three case-study organisations. To achieve the second aim, the authors surveyed human resource and other senior managers in all Victorian health sector organisations and reviewed policy documents related to the health system reform planned for Australia. The findings suggest that there is a relationship between HPWS and the perceived quality of care that is mediated by human resource management (HRM) outcomes, such as psychological empowerment. It is also found that health care organisations in Australia generally do not have the necessary aspects of HPWS in place, creating a policy and practice gap. Although the chief executive officers of health

  15. A High Performance Parachute System for the Recovery of Small Space Capsules

    NASA Astrophysics Data System (ADS)

    Koldaev, V.; Moraes, P., Jr.

    2002-01-01

    A non-guided high performance parachute system has been developed and tested for the recovery of orbital payloads or space capsules. The system is safe, efficient and affordable for use with small vehicles. It is based on a pilot chute, a drag chute, a cluster of main parachutes and an air bag to reduce the impact. The system has been designed to maintain a stable descent at velocities up to 10 m/s and to prevent failures. To assure the achievement of all these characteristics, the determination of the parachute canopy areas, inflation behaviour and flight dynamics has been carried out by numerical optimisation of the system parameters. Due to the mainly empirical nature of parachute design and development, wind tunnel and flight tests were conducted in order to achieve the high reliability imposed by user requirements. The present article describes the system and discusses in detail the design features and testing of the parachutes.
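
    The stable-descent requirement follows from equating capsule weight with canopy drag, m g = ½ ρ C_D A v². A sketch of sizing the canopy from that balance (the capsule mass and drag coefficient below are assumed values, not figures from the paper):

```python
import math

def canopy_area(mass_kg, v_descent, rho=1.225, c_d=0.75):
    """Canopy area for a steady descent at v_descent (m/s).

    From the equilibrium condition m*g = 0.5 * rho * c_d * A * v**2.
    The drag coefficient is an assumed typical value, not one from
    the paper; rho is sea-level air density.
    """
    g = 9.81
    return 2.0 * mass_kg * g / (rho * c_d * v_descent ** 2)

# Hypothetical 150 kg capsule at the 10 m/s limit quoted in the abstract.
area = canopy_area(150.0, 10.0)
```

    Halving the allowed descent velocity quadruples the required area, which is why cluster configurations and impact-attenuating air bags are attractive for small capsules.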

  16. Coal-fired high performance power generating system. Quarterly progress report, January 1--March 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    This report covers work carried out under Task 2, Concept Definition and Analysis, and Task 3, Preliminary R&D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx and particulate emissions ≤ 25% of NSPS; coal providing ≥ 65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined-cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The cycle optimization effort has brought about several revisions to the system configuration resulting from: (1) the use of Illinois No. 6 coal instead of Utah Blind Canyon; (2) the use of coal rather than methane as a reburn fuel; (3) reducing radiant section outlet temperatures to 1700°F (down from 1800°F); and (4) the need to use higher-performance (higher-cost) steam cycles to offset losses introduced as more realistic operating and construction constraints are identified.

  17. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
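
    The core data structure, size-adaptive distributable block volumes, amounts to tiling the volume into independently processable sub-arrays. A minimal sketch (the block size and the doubling operation are arbitrary placeholders, not the platform's API):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def iter_blocks(shape, block):
    """Yield slice tuples tiling a 3D volume; edge blocks shrink to fit."""
    for z in range(0, shape[0], block[0]):
        for y in range(0, shape[1], block[1]):
            for x in range(0, shape[2], block[2]):
                yield (slice(z, min(z + block[0], shape[0])),
                       slice(y, min(y + block[1], shape[1])),
                       slice(x, min(x + block[2], shape[2])))

def process_volume(vol, block=(32, 32, 32), fn=lambda b: b * 2):
    """Apply fn to each block in parallel and reassemble the volume."""
    out = np.empty_like(vol)
    slices = list(iter_blocks(vol.shape, block))
    with ThreadPoolExecutor() as pool:
        for sl, result in zip(slices, pool.map(lambda s: fn(vol[s]), slices)):
            out[sl] = result
    return out
```

    Because each block is self-contained, the same decomposition works whether the workers are cores, cluster nodes, or cloud instances; algorithms needing neighborhood context would pad each block with a halo of surrounding voxels.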

  18. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next-generation DOE UltraScience Network, and the fast emergence of new attacks/viruses/worms, existing network intrusion detection systems (IDS) are insufficient because they: • Are mostly host-based and not scalable to high-performance networks; • Are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • Cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • Online traffic recording and analysis on high-speed networks; • Online adaptive flow-level anomaly/intrusion detection and mitigation; • An integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). In addition, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.
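
    Adaptive flow-level anomaly detection of the signature-free kind targeted above can be illustrated, in greatly simplified form, by flagging samples that deviate from a trailing robust baseline. This toy detector is not the HPNAIDM algorithm:

```python
import statistics
from collections import deque

def anomalous(rates, window=8, k=4.0):
    """Return indices whose value deviates from the trailing median by more
    than k times the trailing MAD (median absolute deviation).

    Using median/MAD instead of mean/stddev keeps the baseline robust, so
    a single attack burst does not inflate its own detection threshold.
    """
    hist, flagged = deque(maxlen=window), []
    for i, r in enumerate(rates):
        if len(hist) == window:
            med = statistics.median(hist)
            mad = statistics.median(abs(h - med) for h in hist) or 1e-9
            if abs(r - med) > k * mad:
                flagged.append(i)
        hist.append(r)
    return flagged
```

    A real flow-level system would maintain such state per flow (or per sketch bucket) at line rate and feed the flagged flows to the false-positive-reduction stage.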

  19. High-performance electronics for time-of-flight PET systems

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Peng, Q.; Vu, C. Q.; Turko, B. T.; Moses, W. W.

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing electronics are performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively.
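
    Identifying coincident events "by digitally comparing the values of the time stamps" can be sketched as a linear two-pointer sweep over sorted stamp lists; the coincidence window below is an assumed figure, not a parameter from the paper:

```python
def find_coincidences(stamps_a, stamps_b, window_ps=500):
    """Pair events from two detectors whose time stamps differ by less
    than the coincidence window.

    Both stamp lists are assumed sorted (as a TDC naturally emits them),
    so the sweep runs in O(n + m).
    """
    pairs, j = [], 0
    for i, ta in enumerate(stamps_a):
        # Skip detector-B events too old to match this A event.
        while j < len(stamps_b) and stamps_b[j] < ta - window_ps:
            j += 1
        # Collect every B event inside the window around ta.
        k = j
        while k < len(stamps_b) and stamps_b[k] <= ta + window_ps:
            pairs.append((i, k))
            k += 1
    return pairs
```

    The better the timing resolution (60 ps FWHM here), the narrower this window can be, which directly cuts the random-coincidence rate.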

  20. High-performance electronics for time-of-flight PET systems.

    PubMed

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing electronics are performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively.

  1. High-performance electronics for time-of-flight PET systems

    PubMed Central

    Choong, W.-S.; Peng, Q.; Vu, C.Q.; Turko, B.T.; Moses, W.W.

    2014-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing electronics are performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC’s CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC’s CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively. PMID:24575149

  2. Coupled ocean/atmosphere modeling on high-performance computing systems

    SciTech Connect

    Eltgroth, P.G.; Bolstad, J.H.; Duffy, P.B.; Mirin, A.A.; Wang, H.; Wehner, M.F.

    1996-12-01

    We investigate performance of a coupled ocean/atmosphere general circulation model on high-performance computing systems. Our programming paradigm has been domain decomposition with message-passing for distributed memory. With the emergence of SMP clusters we are investigating how to best support shared memory as well. We consider how to assign processors to the major model components so as to obtain optimal load balance. We examine throughput on contemporary parallel architectures, such as the Cray-T3D/T3E and the IBM-SP family.

  3. Coupled ocean/atmosphere modeling on high-performance computing systems

    SciTech Connect

    Eltgroth, P.G.; Bolstad, J.H.; Duffy, P.B.; Mirin, A.A.; Wang, H.; Wehner, M.F.

    1997-03-01

    We investigate performance of a coupled ocean/atmosphere general circulation model on high-performance computing systems. Our programming paradigm has been domain decomposition with message-passing for distributed memory. With the emergence of SMP clusters we are investigating how to best support shared memory as well. We consider how to assign processors to the major model components so as to obtain optimal load balance. We examine throughput on contemporary parallel architectures, such as the Cray-T3D/T3E and the IBM-SP family.
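    The load-balance question raised in this abstract — how to split processors between coupled model components — can be sketched as a proportional allocation so that both components finish a time step together. The function below is a generic illustration, not the paper's method; component names and costs are invented.

```python
# Sketch: divide N processors among coupled model components in
# proportion to their relative per-step cost, so components finish a
# time step at roughly the same time (illustrative, not from the paper).

def assign_processors(total, costs):
    """costs: dict component -> relative per-step work.
    Returns component -> processor count (each gets at least one)."""
    work = sum(costs.values())
    alloc = {c: max(1, round(total * w / work)) for c, w in costs.items()}
    # repair rounding drift so counts sum exactly to the total
    drift = total - sum(alloc.values())
    busiest = max(costs, key=costs.get)
    alloc[busiest] += drift
    return alloc

print(assign_processors(64, {"ocean": 3.0, "atmosphere": 1.0}))
# → {'ocean': 48, 'atmosphere': 16}
```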

  4. The parallel I/O architecture of the High Performance Storage System (HPSS)

    SciTech Connect

    Watson, R.W.; Coyne, R.A.

    1995-02-01

    Rapid improvements in computational science, processing capability, main memory sizes, data collection devices, multimedia capabilities and integration of enterprise data are producing very large datasets (10s-100s of gigabytes to terabytes). This rapid growth of data has resulted in a serious imbalance in I/O and storage system performance and functionality. One promising approach to restoring balanced I/O and storage system performance is use of parallel data transfer techniques for client access to storage, device-to-device transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, Parallel Transport Protocol, parallel FTP, and parallel client Application Programming Interface (API) used by the High Performance Storage System (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
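    The parallel-transfer idea described above can be sketched by dividing a large transfer into stripes that independent mover processes handle concurrently. The round-robin layout below is a generic illustration, not the actual HPSS protocol or API.

```python
# Illustrative stripe layout for a parallel transfer: byte ranges of a
# large object are assigned round-robin to independent data movers,
# which could then transfer their stripes concurrently.
# Stripe size and mover count are arbitrary example values.

def stripe_layout(total_bytes, stripe_size, n_movers):
    """Assign byte ranges [start, end) round-robin to n_movers."""
    layout = {m: [] for m in range(n_movers)}
    start, idx = 0, 0
    while start < total_bytes:
        end = min(start + stripe_size, total_bytes)
        layout[idx % n_movers].append((start, end))
        start, idx = end, idx + 1
    return layout

print(stripe_layout(10_000, 4_096, 2))
# → {0: [(0, 4096), (8192, 10000)], 1: [(4096, 8192)]}
```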

  5. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
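    The subscription-based event filtering described above can be sketched as follows. The Event shape and the predicate/handler API are invented for illustration; the point is only that events failing every subscriber's predicate are dropped, reducing monitoring traffic.

```python
# Hypothetical event filter: subscribers register a predicate and a
# handler; only events matching a predicate are forwarded downstream.

class EventFilter:
    def __init__(self):
        self.subscriptions = []  # list of (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        self.subscriptions.append((predicate, handler))

    def publish(self, event):
        """Forward the event to every matching subscriber; return the
        number of deliveries (0 means the event was filtered out)."""
        forwarded = 0
        for predicate, handler in self.subscriptions:
            if predicate(event):
                handler(event)
                forwarded += 1
        return forwarded

f = EventFilter()
errors = []
f.subscribe(lambda e: e["severity"] == "error", errors.append)
f.publish({"severity": "info", "msg": "heartbeat"})   # filtered out
f.publish({"severity": "error", "msg": "node down"})  # forwarded
print(errors)
```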

  6. Towards Building High Performance Medical Image Management System for Clinical Trials

    PubMed Central

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, propose and evaluate a solution by using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise level high performance medical image management systems. PMID:21603096
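    Two of the ideas in this abstract — hot/cold tiering between solid state and hard disk storage, and database-style versioning for audit trails — can be sketched together. Everything below (class name, threshold, access-count policy) is an assumption for illustration, not the paper's implementation.

```python
# Sketch of a hybrid-tier, versioned image store: frequently accessed
# images are mapped to the SSD tier, and every put() appends a new
# revision so prior versions survive for audit. All policy details
# (threshold, counting) are invented for illustration.

class ImageStore:
    def __init__(self, ssd_hot_threshold=10):
        self.access_counts = {}
        self.revisions = {}   # image_id -> append-only list of versions
        self.hot = ssd_hot_threshold

    def tier_for(self, image_id):
        count = self.access_counts.get(image_id, 0)
        return "ssd" if count >= self.hot else "hdd"

    def put(self, image_id, blob):
        self.revisions.setdefault(image_id, []).append(blob)

    def get(self, image_id, version=-1):
        self.access_counts[image_id] = self.access_counts.get(image_id, 0) + 1
        return self.revisions[image_id][version]

store = ImageStore(ssd_hot_threshold=2)
store.put("scan-1", b"v1")
store.put("scan-1", b"v2")      # new revision; v1 kept for audit trail
store.get("scan-1")
store.get("scan-1")             # now hot enough for the SSD tier
print(store.tier_for("scan-1"), store.get("scan-1", version=0))
# → ssd b'v1'
```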

  7. Towards Building High Performance Medical Image Management System for Clinical Trials.

    PubMed

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, propose and evaluate a solution by using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.

  8. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    SciTech Connect

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling and the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of different categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation and characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on ~1,000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large scale high performance computing resources for the first time.

  9. Coal-fired high performance power generating system. Quarterly progress report, April 1--June 30, 1993

    SciTech Connect

    Not Available

    1993-11-01

    This report covers work carried out under Task 2, Concept Definition and Analysis, Task 3, Preliminary R&D, and Task 4, Commercial Generating Plant Design, under Contract AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤25% NSPS; coal ≥65% of heat input; all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. A survey of currently available high temperature alloys has been completed and some of their high temperature properties are shown for comparison. Several of the most promising candidates will be selected for testing to determine corrosion resistance and high temperature strength. The corrosion resistance testing of candidate refractory coatings is continuing and some of the recent results are presented. This effort will provide important design information that will ultimately establish the operating ranges of the HITAF.

  10. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to markup and annotate images. In such environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, propose and evaluate a solution by using a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archive of image revision history. Our experiments show promising results of our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.

  11. A High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using Particle System

    NASA Astrophysics Data System (ADS)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of the rainfall-runoff process is essential for disaster emergency and sustainable development. One common disadvantage of the existing conceptual hydrological models is that they are highly dependent upon specific spatial-temporal contexts. Meanwhile, due to the inter-dependence of adjacent flow paths, it is still difficult for RS- or GIS-supported distributed hydrological models to achieve high performance in real-world applications. As an attempt to improve the performance efficiencies of those models, this study presents a high-performance rainfall-runoff simulating framework based on the flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent rain drop particles. A separate particle system, representing surface runoff, is involved to model the precipitation process and simulate surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The result of the speedup experiment shows that the proposed framework can significantly improve simulation performance simply by adding independent processors. By separating the catchment elements and the accumulated water, this study provides an extensible solution for improving the existing distributed hydrological models. Further, a parallel modeling and simulation platform needs to be developed and validated before it can be applied to monitoring real-world hydrologic processes.
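    The core idea above — independent rain-drop particles advancing along a topologically linked flow-path network, so each particle can be updated by a separate processor — can be sketched with a toy network. The network and step rule below are invented for illustration, not the paper's data structures.

```python
# Toy particle-on-flow-path-network sketch: each particle moves one node
# downstream per step; because particles are independent, the per-particle
# updates are trivially parallelizable. Network is illustrative only.

def step_particles(positions, downstream):
    """Advance each particle one node downstream; outlet nodes (with no
    downstream entry) are absorbing."""
    return [downstream.get(p, p) for p in positions]

# illustrative flow-path network: node -> its downstream node
downstream = {"A": "C", "B": "C", "C": "outlet"}
particles = ["A", "B", "C"]
for _ in range(2):
    particles = step_particles(particles, downstream)
print(particles)
# → ['outlet', 'outlet', 'outlet']
```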

  12. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, offering speedups for computational applications that leverage hardware support for highly-parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means use at exascale comes at a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to NVIDIA Tesla and Intel SandyBridge based systems.

  13. A compilation system that integrates high performance Fortran and Fortran M

    SciTech Connect

    Foster, I.; Xu, Ming; Avalani, B.; Choudhary, A.

    1994-06-01

    Task parallelism and data parallelism are often seen as mutually exclusive approaches to parallel programming. Yet there are important classes of application, for example in multidisciplinary simulation and command and control, that would benefit from an integration of the two approaches. In this paper, we describe a programming system that we are developing to explore this sort of integration. This system builds on previous work on task-parallel and data-parallel Fortran compilers to provide an environment in which the task-parallel language Fortran M can be used to coordinate data-parallel High Performance Fortran tasks. We use an image-processing problem to illustrate the issues that arise when building an integrated compilation system of this sort.

  14. IOPro: a parallel I/O profiling and visualization framework for high-performance storage systems

    SciTech Connect

    Kim, Seong Jo; Zhang, Yuanrui; Son, Seung Woo; Kandemir, Mahmut; Liao, Wei-Keng; Thakur, Rajeev; Choudhary, Alok N.

    2015-03-01

    Efficient execution of large-scale scientific applications requires high-performance computing systems designed to meet the I/O requirements. To achieve high performance, such data-intensive parallel applications use a multi-layer I/O software stack, which consists of high-level I/O libraries such as PnetCDF and HDF5, the MPI library, and parallel file systems. To design efficient parallel scientific applications, understanding the complicated flow of I/O operations and the involved interactions among the libraries is quintessential. Such comprehension helps identify I/O bottlenecks and thus exploit the potential performance in different layers of the storage hierarchy. To profile the performance of individual components in the I/O stack and to understand complex interactions among them, we have implemented a GUI-based integrated profiling and analysis framework, IOPro. IOPro automatically generates an instrumented I/O stack, runs applications on it, and visualizes detailed statistics based on the user-specified metrics of interest. We present experimental results from two different real-life applications and show how our framework can be used in practice. By generating an end-to-end trace of the whole I/O stack and pinpointing I/O interference, IOPro aids in understanding I/O behavior and improving the I/O performance significantly.
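    The kind of per-layer instrumentation IOPro performs can be sketched generically: wrap each layer's calls so the time spent in that layer is accumulated separately. IOPro itself instruments PnetCDF/HDF5/MPI-IO; the wrapper and the stand-in layers below are illustrative assumptions.

```python
# Generic per-layer I/O timing sketch: each layer's entry points are
# wrapped so the profiler can attribute elapsed time to a named layer.
# The "hdf5"/"posix" stand-ins are invented; real tools instrument the
# actual library boundaries.

import time
from collections import defaultdict

layer_time = defaultdict(float)

def instrument(layer, fn):
    def wrapped(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            layer_time[layer] += time.perf_counter() - t0
    return wrapped

# stand-in for a high-level library write that calls a lower layer
posix_write = instrument("posix", lambda buf: len(buf))
hdf5_write = instrument("hdf5", lambda buf: posix_write(buf))

hdf5_write(b"x" * 1024)
print(sorted(layer_time))  # both layers have accumulated time
```

    Note the "hdf5" time here includes the nested "posix" time; attributing exclusive versus inclusive time per layer is exactly the kind of accounting an end-to-end trace makes possible.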

  15. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH PERFORMANCE POWER SYSTEMS PHASE II AND III

    SciTech Connect

    1998-09-30

    This report presents work carried out under contract DE-AC22-95PC95144 "Engineering Development of Coal-Fired High Performance Systems Phase II and III." The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) >47%; NOx, SOx, and particulates <10% NSPS (New Source Performance Standard); coal providing >65% of heat input; all solid wastes benign; cost of electricity <90% of present plants. Phase I, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase I also included preliminary R&D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase II, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  16. An Empirical Examination of the Mechanisms Mediating between High-Performance Work Systems and the Performance of Japanese Organizations

    ERIC Educational Resources Information Center

    Takeuchi, Riki; Lepak, David P.; Wang, Heli; Takeuchi, Kazuo

    2007-01-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human…

  18. Study on development system of increasing gearbox for high-performance wind-power generator

    NASA Astrophysics Data System (ADS)

    Xu, Hongbin; Yan, Kejun; Zhao, Junyu

    2005-12-01

    Based on the analysis of the development potentiality of wind-power generators and the domestic manufacture of their key parts in China, an independent development system for the Increasing Gearbox for High-performance Wind-power Generator (IGHPWG) was introduced. The main elements of the system were studied, including the procedure design, design analysis system, manufacturing technology and detecting system, and the relevant important technologies were analyzed, such as the mixed optimal joint transmission structure of a first-stage planetary drive with a two-stage parallel-axle drive based on equal strength, tooth root round cutting technology before milling the hard tooth surface, high-precision tooth grinding technology, optimal heat treatment technology and complex surface technique, and the rig test and detection technique of the IGHPWG. The development concept advances data sharing and a quality assurance system through all the elements of the development system. Increasing gearboxes for 600 kW and 1 MW wind-power generators have been successfully developed through the application of this development system.

  19. Management of Virtual Large-scale High-performance Computing Systems

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2011-01-01

    Linux is widely used on high-performance computing (HPC) systems, from commodity clusters to Cray supercomputers (which run the Cray Linux Environment). These platforms primarily differ in their system configuration: some only use SSH to access compute nodes, whereas others employ full resource management systems (e.g., Torque and ALPS on Cray XT systems). Furthermore, the latest improvements in system-level virtualization techniques, such as hardware support, virtual machine migration for system resilience purposes, and reduction of virtualization overheads, enable the use of virtual machines on HPC platforms. Currently, tools for the management of virtual machines in the context of HPC systems are still quite basic, and often tightly coupled to the target platform. In this document, we present a new system tool for the management of virtual machines in the context of large-scale HPC systems, including a run-time system and support for all major virtualization solutions. The proposed solution is based on two key aspects. First, Virtual System Environments (VSE), introduced in a previous study, provide a flexible method to define the software environment that will be used within virtual machines. Second, we propose a new system run-time for the management and deployment of VSEs on HPC systems, which supports a wide range of system configurations. For instance, this generic run-time can interact with resource managers such as Torque for the management of virtual machines. Finally, the proposed solution provides appropriate abstractions to enable use with a variety of virtualization solutions on different Linux HPC platforms, including Xen, KVM and the HPC-oriented Palacios.

  20. State observers and Kalman filtering for high performance vibration isolation systems

    NASA Astrophysics Data System (ADS)

    Beker, M. G.; Bertolini, A.; van den Brand, J. F. J.; Bulten, H. J.; Hennes, E.; Rabeling, D. S.

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.
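    The state-observer idea in this abstract can be illustrated with a one-dimensional Kalman filter: a noisy position measurement is fused with a simple motion model to yield a lower-variance estimate that a feedback controller (an LQR in the paper) can act on. The scalar random-walk model and all noise values below are illustrative assumptions, not the Advanced Virgo controller.

```python
# One-dimensional Kalman filter sketch (scalar random-walk state).
# q and r are illustrative process/measurement noise variances; the real
# system uses a multi-channel observer feeding an LQR.

def kalman_update(x, p, z, q=1e-4, r=0.04):
    """One predict/update step.
    x, p: prior state estimate and variance; z: new measurement."""
    p = p + q                # predict: variance grows by process noise
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # correct with the measurement residual
    p = (1 - k) * p          # updated (reduced) variance
    return x, p

x, p = 0.0, 1.0              # poor initial guess, high uncertainty
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:   # noisy readings around 1.0
    x, p = kalman_update(x, p, z)
print(x, p)                  # estimate converges toward 1.0, variance shrinks
```

    In a vibration isolation loop, the filtered estimate (rather than the raw noisy sensor signal) is what the feedback law would use to compute the actuator command.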

  1. State observers and Kalman filtering for high performance vibration isolation systems.

    PubMed

    Beker, M G; Bertolini, A; van den Brand, J F J; Bulten, H J; Hennes, E; Rabeling, D S

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.

  2. A survey on resource allocation in high performance distributed computing systems

    SciTech Connect

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects dedicated to large-scale distributed computing systems have designed and developed resource allocation mechanisms with a variety of architectures and services. In this study, a comprehensive survey describing resource allocation in various HPC systems is reported. The aim of the work is to aggregate under a joint framework the existing solutions for HPC, to provide a thorough analysis and characteristics of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role towards the performance improvement of all HPC classifications. Therefore, a comprehensive discussion of widely used resource allocation strategies deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified the HPC systems into three broad categories, namely: (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.
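    One classic allocation strategy in this literature is greedy list scheduling: place each task on the currently least-loaded node. The sketch below is a textbook heuristic for illustration, not a specific system from the survey; task names and costs are invented.

```python
# Greedy list-scheduling sketch: tasks (largest first) go to the node
# with the smallest current load. A simple, widely used heuristic for
# minimizing makespan; illustrative only.

def list_schedule(task_costs, n_nodes):
    """Assign tasks to nodes; returns (placement, makespan)."""
    loads = [0.0] * n_nodes
    placement = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        node = loads.index(min(loads))   # least-loaded node
        placement[task] = node
        loads[node] += cost
    return placement, max(loads)

tasks = {"t1": 7, "t2": 5, "t3": 4, "t4": 4}
placement, makespan = list_schedule(tasks, 2)
print(placement, makespan)  # makespan 11.0 for this toy instance
```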

  3. State observers and Kalman filtering for high performance vibration isolation systems

    SciTech Connect

    Beker, M. G.; Bertolini, A.; Hennes, E.; Rabeling, D. S.; Brand, J. F. J. van den; Bulten, H. J.

    2014-03-15

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.

  4. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles

    NASA Astrophysics Data System (ADS)

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles was proposed. We synthesized a kind of hexagonal monodisperse β-NaYF4:Yb3+,Er3+,Tm3+ upconversion nanoparticle and manipulated the intensity ratio of the red emission (at 653 nm) and green emission (at 523 and 541 nm) to around 2:1, in order to match well with the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect of the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows obvious switching behavior with high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, a kind of important anion in environmental and physiological systems, which could also reduce Prussian blue to Prussian white nanoparticles leading to a decrease of the absorption spectrum, was chosen as the target. And we were able to determine the concentration of sulfite in aqueous solution with a low detection limit and a broad linear relationship.

  5. Multisensory systems integration for high-performance motor control in flies.

    PubMed

    Frye, Mark A

    2010-06-01

    Engineered tracking systems 'fuse' data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control.

  6. Multisensory systems integration for high-performance motor control in flies

    PubMed Central

    Frye, Mark A.

    2010-01-01

    Engineered tracking systems ‘fuse’ data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control. PMID:20202821

  7. Users matter: multi-agent systems model of high performance computing cluster users.

    SciTech Connect

    North, M. J.; Hood, C. S.; Decision and Information Sciences; IIT

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
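
    A minimal sketch of the user-level modeling idea: agents whose job-submission behavior adapts to cluster state. The agent rule, rates, and service model below are invented for illustration and are far simpler than the study's multi-scale, multi-agent model.

```python
import random

random.seed(42)  # deterministic toy run

class UserAgent:
    """Toy user agent: submits jobs, backing off as the queue grows."""
    def __init__(self, rate):
        self.rate = rate  # baseline probability of submitting a job per tick
    def act(self, queue_len):
        # Adaptive behavior: long queues discourage submission.
        effective = self.rate / (1 + 0.1 * queue_len)
        return random.random() < effective

def simulate(users, ticks, service_per_tick):
    """Advance a single shared job queue; return (jobs served, backlog)."""
    queue = 0
    served = 0
    for _ in range(ticks):
        queue += sum(u.act(queue) for u in users)
        done = min(queue, service_per_tick)
        queue -= done
        served += done
    return served, queue

users = [UserAgent(0.5) for _ in range(20)]
served, backlog = simulate(users, ticks=100, service_per_tick=8)
print(served, backlog)
```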

  8. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  9. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
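
    The contention-resolution idea rests on the AWGR's cyclic routing property: the wavelength launched at an input deterministically selects the output port. A minimal sketch, assuming the standard (input + wavelength) mod N routing rule:

```python
def awgr_output_port(input_port, wavelength_index, n_ports):
    """Cyclic routing of an N x N arrayed waveguide grating router:
    the wavelength chosen at an input selects the output port."""
    return (input_port + wavelength_index) % n_ports

N = 8
# From any single input, the N wavelengths reach all N outputs exactly once,
# so contention can be resolved in the wavelength domain.
outputs = {awgr_output_port(3, w, N) for w in range(N)}
print(sorted(outputs))
```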

  10. Measures and measurement of high-performance work systems in health care settings: Propositions for improvement.

    PubMed

    Etchegaray, Jason M; St John, Cynthia; Thomas, Eric J

    2011-01-01

    Given that non-health care research has demonstrated many positive outcomes for organizations using high-performance work systems (HPWSs), a closer examination of HPWSs in health care settings is warranted. We conducted a narrative review of the literature to understand how previous researchers have measured HPWSs in health care settings and what relationships exist between HPWSs and outcomes. Articles that examined HPWSs in health care settings were identified and summarized. Key discrepancies and agreements in the existing HPWS research, including definitional, conceptual, and analytical areas of interest to health services researchers, are included. The findings demonstrate that although HPWSs might be a valuable predictor of health care-related outcomes, opportunities exist for improving HPWS measurement in health care settings. Suggestions are provided to help guide future health services researchers in conducting research on HPWSs. Practice implications are provided for health care managers.

  11. Engineering development of coal-fired high-performance power systems. Technical report, July - September 1996

    SciTech Connect

    1996-11-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, AlliedSignal Aerospace Equipment Systems, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase I of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolyzation process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). It is a pulverized fuel-fired boiler/airheater where steam and gas turbine air are indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and then a pilot plant with integrated pyrolyzer and char combustion systems will be tested. In this report, progress in the pyrolyzer pilot plant preparation is reported. The results of extensive laboratory and bench scale testing of representative char are also reported. Preliminary results of combustion modeling of the char combustion system are included. There are also discussions of the auxiliary systems that are planned for the char combustion system pilot plant and the status of the integrated system pilot plant.

  12. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    SciTech Connect

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
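
    The log-structured write idea described above can be sketched as follows; the function names and the 1-D array stand-in are illustrative assumptions, not the Scientific Data Services API.

```python
# Sketch: subarray writes land sequentially in a log during the write phase,
# and are reassembled later into the logical array layout as resources permit.

def append_write(log, offset, subarray):
    """Writes are appended in arrival order; the logical offset is metadata."""
    log.append((offset, subarray))

def reassemble(log, length):
    """Materialize the logical 1-D array from the write log."""
    arr = [0] * length
    for offset, sub in log:
        arr[offset:offset + len(sub)] = sub
    return arr

log = []
append_write(log, 4, [40, 50, 60])   # subarrays may arrive out of order
append_write(log, 0, [1, 2, 3, 4])
print(reassemble(log, 8))
```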

  13. High-performance IR thermography system based on Class II Thermal Imaging Common Modules

    NASA Astrophysics Data System (ADS)

    Bell, Ian G.

    1991-03-01

    The Class II Thermal Imaging Common Modules were originally developed for the U.K. Ministry of Defence as the basis of a number of high performance thermal imaging systems for use by the British Armed Forces. These systems are characterized by high spatial resolution, high thermal resolution and a real time thermal image update rate. A TICM II thermal imaging system uses a cryogenically cooled eight-element Cadmium-Mercury-Telluride (CMT) SPRITE (Signal PRocessing In The Element) detector which is mechanically scanned over the thermal scene to be viewed. The TALYTHERM system is based on a modified TICM II thermal imager connected to an IBM PC-AT compatible computer having image processing hardware installed and running the T.E.M.P.S. (Thermal Emission Measurement and Processing System) software package for image processing and data analysis. The operation of a TICM II thermal imager is briefly described, highlighting the use of the SPRITE detector which, coupled with a serial/parallel scanning technique, yields high temporal, spatial and thermal resolutions. The conversion of this military thermal imager into a thermography system is described, including a discussion of the modifications required to a standard imager. The technique for extracting temperature information from a real time thermal image, and how this is implemented in a TALYTHERM system, is described. The D.A.R.T. (Discrete Attenuation of Radiance Thermography) system, which is based on an extensively modified TICM II thermal imager, is also described. This system is capable of measuring temperatures up to 1000 degrees C whilst maintaining the temporal and spatial resolutions inherent in a TICM II imager. Finally, applications of the TALYTHERM system in areas such as NDT (Non-Destructive Testing), medical research and military research are briefly described.

  14. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  15. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    SciTech Connect

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.
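
    One of the strategies mentioned, transaction-based incremental checkpointing, can be sketched as content-hash-based change detection: only blocks whose contents changed since the last checkpoint are persisted. The scheme below is an illustrative assumption, not the authors' implementation.

```python
import hashlib

def checkpoint(state_blocks, last_hashes):
    """Return (blocks to persist this checkpoint, updated hash table)."""
    dirty = {}
    hashes = {}
    for key, block in state_blocks.items():
        h = hashlib.sha256(block).hexdigest()
        hashes[key] = h
        if last_hashes.get(key) != h:
            dirty[key] = block  # changed (or new) since last checkpoint
    return dirty, hashes

state = {"a": b"xxxx", "b": b"yyyy"}
dirty1, hashes = checkpoint(state, {})   # first checkpoint: everything dirty
state["b"] = b"zzzz"                     # only one block changes
dirty2, hashes = checkpoint(state, hashes)
print(sorted(dirty1), sorted(dirty2))
```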

  16. Development and implementation of a high-performance, cardiac-gated dual-energy imaging system

    NASA Astrophysics Data System (ADS)

    Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Tward, D. J.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2007-03-01

    Mounting evidence suggests that the superposition of anatomical clutter in a projection radiograph poses a major impediment to the detectability of subtle lung nodules. Through decomposition of projections acquired at multiple kVp settings, dual-energy (DE) imaging promises to dramatically improve lung nodule detectability and, in part through quantitation of nodule calcification, to increase specificity in nodule characterization. The development of a high-performance DE chest imaging system is reported, with design and implementation guided by fundamental imaging performance metrics. A diagnostic chest stand (Kodak RVG 5100 digital radiography system) provided the basic platform, modified to include: (i) a filter wheel, (ii) a flat-panel detector (Trixell Pixium 4600), (iii) a computer control and monitoring system for cardiac-gated acquisition, and (iv) DE image decomposition and display. Computational and experimental studies of imaging performance guided optimization of key acquisition technique parameters, including: x-ray filtration, allocation of dose between low- and high-energy projections, and kVp selection. A system for cardiac-gated acquisition was developed, directing x-ray exposures to the quiescent period of the heart cycle, thereby minimizing anatomical misregistration. A research protocol including 200 patients imaged following lung nodule biopsy is underway, allowing preclinical evaluation of DE imaging performance relative to conventional radiography and low-dose CT.
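
    DE decomposition is commonly performed by weighted log subtraction of the low- and high-kVp projections; the abstract does not give the algorithm, so the sketch below uses that standard technique, with illustrative attenuation values and a weight chosen to cancel soft tissue.

```python
import math

def de_decompose(i_low, i_high, i0_low, i0_high, w):
    """Weighted log subtraction: soft-tissue-cancelled signal from
    low/high-kVp intensities (w tuned to null the tissue material)."""
    return math.log(i0_high / i_high) - w * math.log(i0_low / i_low)

# Two pixels with identical soft tissue but different bone content:
# intensities follow I = I0 * exp(-mu*t), with illustrative attenuations.
i0 = 1000.0
w = 0.5  # cancels a material whose low:high attenuation ratio is 2:1
tissue_only = de_decompose(i0 * math.exp(-1.0), i0 * math.exp(-0.5),
                           i0, i0, w)
tissue_bone = de_decompose(i0 * math.exp(-1.0 - 0.8), i0 * math.exp(-0.5 - 0.6),
                           i0, i0, w)
print(round(tissue_only, 6), round(tissue_bone, 6))
```

    With the tissue term nulled, the first pixel decomposes to zero while the bone-bearing pixel retains a residual signal, which is the clutter-suppression effect described above.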

  17. RAPID COMMUNICATION: Novel high performance small-scale thermoelectric power generation employing regenerative combustion systems

    NASA Astrophysics Data System (ADS)

    Weinberg, F. J.; Rowe, D. M.; Min, G.

    2002-07-01

    Hydrocarbon fuels have specific energy contents some two orders of magnitude greater than any electrical storage device. They therefore proffer an ideal source in the universal quest for compact, lightweight, long-lasting alternatives to batteries to power the ever-proliferating electronic devices. The motivation lies in the need to power, for example, equipment for infantry troops, weather stations and buoys in polar regions that must signal their readings intermittently to passing satellites, unattended over long periods, and many other devices. Fuel cells, converters based on miniaturized gas turbines, and other systems under intensive study give rise to diverse practical difficulties. Thermoelectric devices are robust, durable and have no moving parts, but tend to be exceedingly inefficient. We propose regenerative combustion systems which mitigate this impediment and are likely to make high performance small-scale thermoelectric power generation applicable in practice. The efficiency of a thermoelectric generating system using preheat when operated between ambient and 1200 K is calculated to exceed the efficiency of the best present day thermoelectric conversion system by more than 20%.
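
    The quoted efficiency comparison can be grounded in the standard maximum-efficiency formula for a thermoelectric generator; the ZT value below is an illustrative assumption, not a number from the paper.

```python
import math

def te_efficiency(t_hot, t_cold, zt):
    """Standard maximum-efficiency formula for a thermoelectric generator
    with figure of merit ZT (evaluated at the mean temperature)."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1 + zt)
    return carnot * (m - 1) / (m + t_cold / t_hot)

# Operating between ambient (300 K) and 1200 K, as in the abstract;
# ZT = 1 is an illustrative assumption for the material.
eta = te_efficiency(1200.0, 300.0, 1.0)
print(round(eta, 4))
```

    The formula makes plain why regeneration helps: raising the effective hot-side temperature increases the Carnot factor, which multiplies the material-limited term.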

  18. Damage-mitigating control of aerospace systems for high performance and extended life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang; Carpino, Marc; Lorenzo, Carl F.; Merrill, Walter C.

    1992-01-01

    The concept of damage-mitigating control is to minimize fatigue (as well as creep and corrosion) damage of critical components of mechanical structures while simultaneously maximizing the system dynamic performance. Given a dynamic model of the plant and the specifications for performance and stability robustness, the task is to synthesize a control law that would meet the system requirements and, at the same time, satisfy the constraints that are imposed by the material and structural properties of the critical components. The authors present the concept of damage-mitigating control systems design with the following objectives: (1) to achieve high performance with a prolonged life span; and (2) to systematically update the controller as the new technology of advanced materials evolves. The major challenge is to extract the information from the material properties and then utilize this information in a mathematical form so that it can be directly applied to robust control synthesis for mechanical systems. The basic concept of damage-mitigating control is illustrated using a relatively simplified model of a space shuttle main engine.

  19. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault-tolerance and load-balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and a 48-core SMP workstation.
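
    Distributing roughly two million simulated entities over 64 computing elements reduces, at its simplest, to a block decomposition of the entity space; the sketch below shows only that bookkeeping and is not the NGNS2 API.

```python
# Illustrative block decomposition of simulated entities across ranks,
# in the spirit of NGNS2's MPI distribution (not its actual interface).

def partition(n_entities, n_ranks):
    """Contiguous blocks, with any remainder spread over the low ranks."""
    base, extra = divmod(n_entities, n_ranks)
    blocks = []
    start = 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks

# ~2M entities over a 64-node cluster, as in the reported experiments.
blocks = partition(2_000_000, 64)
print(len(blocks), len(blocks[0]), sum(len(b) for b in blocks))
```

    Load balancing, as the abstract notes, then amounts to revising this assignment as per-entity costs drift during a long-running simulation.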

  20. Towards a smart Holter system with high performance analogue front-end and enhanced digital processing.

    PubMed

    Du, Leilei; Yan, Yan; Wu, Wenxian; Mei, Qiujun; Luo, Yu; Li, Yang; Wang, Lei

    2013-01-01

    Multiple-lead dynamic ECG recorders (Holter monitors) play an important role in the early detection of various cardiovascular diseases. In this paper, we present the first several steps towards a 12-lead Holter system with a high-performance AFE (Analogue Front-End) and enhanced digital processing. The system incorporates an analogue front-end chip (ADS1298 from TI), which has not yet been widely used in most commercial Holter products. A highly efficient data management module was designed to handle the data exchange between the ADS1298 and the microprocessor (STM32L151 from STMicroelectronics). Furthermore, the system employs a Field Programmable Gate Array (Spartan-3E from Xilinx) module, on which a dedicated real-time 227-step FIR filter is executed to improve the overall filtering performance, since the ADS1298 has no high-pass filtering capability and only allows limited low-pass filtering. The Spartan-3E FPGA also offers further on-board computational capacity for a smarter Holter. The results indicate that all functional blocks work as intended. In the future, we will conduct clinical trials and compare our system with other state-of-the-art systems.
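
    The FPGA's FIR stage can be illustrated with a windowed-sinc design and direct-form convolution; the tap count and cutoff below are illustrative, not the actual 227-step design.

```python
import math

def fir_lowpass_taps(n_taps, cutoff):
    """Windowed-sinc (Hamming) low-pass taps; cutoff in cycles/sample."""
    m = n_taps - 1
    taps = []
    for k in range(n_taps):
        x = k - m / 2
        sinc = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * k / m)
        taps.append(sinc * window)
    return taps

def fir_filter(taps, signal):
    """Direct-form convolution, one multiply-accumulate pass per sample,
    as an FPGA pipeline would implement it."""
    out = []
    for i in range(len(signal)):
        acc = sum(t * signal[i - j] for j, t in enumerate(taps) if i - j >= 0)
        out.append(acc)
    return out

taps = fir_lowpass_taps(31, 0.1)          # toy 31-tap design
dc = fir_filter(taps, [1.0] * 200)        # steady-state DC response
print(round(dc[-1], 3))                   # approaches the tap sum (~unity gain)
```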

  1. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  2. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the “big data” challenge and the high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  3. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the "big data" challenge and the high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
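
    The partition-merge pattern for a nearest-neighbor query can be sketched as a local best per partition followed by a global merge; the helpers below are an illustrative stand-in for the paper's MapReduce pipeline, not its implementation.

```python
# Illustrative partition-merge nearest-neighbor query: each spatial
# partition (map side) returns its local best candidate, and a merge
# step (reduce side) keeps the global best.

def dist2(p, q):
    """Squared Euclidean distance; monotone, so fine for argmin."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def local_nearest(points, query):
    """Map side: best candidate within one partition."""
    return min(points, key=lambda p: dist2(p, query))

def merge_nearest(candidates, query):
    """Reduce side: global best among the partition candidates."""
    return min(candidates, key=lambda p: dist2(p, query))

partitions = [
    [(0, 0), (5, 5)],   # partition 1
    [(2, 1), (9, 9)],   # partition 2
]
query = (2, 2)
candidates = [local_nearest(part, query) for part in partitions]
print(merge_nearest(candidates, query))
```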

  4. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and high scalability to run on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.
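
    The spatial partitioning and boundary-object handling can be sketched as a uniform grid in which an object spanning multiple tiles is replicated into each, with duplicated results amended afterwards. This is an illustrative simplification of Hadoop-GIS's partitioning, not its implementation.

```python
# Illustrative grid partitioning with boundary-object replication.

def tiles_for(box, tile_size):
    """All grid tiles a bounding box (xmin, ymin, xmax, ymax) touches."""
    xmin, ymin, xmax, ymax = box
    xs = range(int(xmin // tile_size), int(xmax // tile_size) + 1)
    ys = range(int(ymin // tile_size), int(ymax // tile_size) + 1)
    return [(x, y) for x in xs for y in ys]

def partition(objects, tile_size):
    """Assign each object to every tile it overlaps; boundary objects
    are replicated, so join results must later be de-duplicated."""
    part = {}
    for oid, box in objects.items():
        for tile in tiles_for(box, tile_size):
            part.setdefault(tile, []).append(oid)
    return part

objects = {
    "a": (1.0, 1.0, 2.0, 2.0),    # fits in one tile
    "b": (9.0, 1.0, 11.0, 2.0),   # boundary object: straddles two tiles
}
part = partition(objects, tile_size=10.0)
print(sorted(part.items()))
```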

  5. An empirical examination of the mechanisms mediating between high-performance work systems and the performance of Japanese organizations.

    PubMed

    Takeuchi, Riki; Lepak, David P; Wang, Heli; Takeuchi, Kazuo

    2007-07-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human capital and encourage a high degree of social exchange within an organization, and that these are positively related to the organization's overall performance. On the basis of a sample of Japanese establishments, the results provide support for the existence of these mediating mechanisms through which high-performance work systems affect overall establishment performance.

  6. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
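The hash-tree trick that lets an inherently serial checksum run in parallel can be sketched in a few lines: hash fixed-size chunks concurrently, then hash the concatenation of the chunk digests. This is a toy illustration of the idea only, with assumed chunk sizes; the real msum stays md5-compatible through more careful tree handling:

```python
# Hedged sketch of split-file + hash-tree checksumming: chunk digests are
# computed in parallel, then combined into a single root digest.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data: bytes, chunk=1 << 20, workers=4) -> str:
    """Hash 1 MiB chunks in parallel, then hash the concatenated digests."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves chunk order, so the root digest is deterministic
        leaves = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    return hashlib.md5(b"".join(leaves)).hexdigest()  # root of the tree

digest = tree_hash(b"x" * 3_000_000)
# the root is identical regardless of how many workers computed the leaves
```

Because each leaf depends only on its own chunk, the scheme extends naturally to the multi-node cooperation described above: different nodes hash disjoint chunk ranges and only the small leaf digests need to be gathered.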

  7. Water and Power Systems Co-optimization under a High Performance Computing Framework

    NASA Astrophysics Data System (ADS)

    Xuan, Y.; Arumugam, S.; DeCarolis, J.; Mahinthakumar, K.

    2016-12-01

    Water and energy system optimization has traditionally been treated as two separate processes, despite the systems' intrinsic interconnections (e.g., water is used for hydropower generation, and thermoelectric cooling requires a large amount of water withdrawal). Given the challenges of urbanization, technology uncertainty, resource constraints, and the imminent threat of climate change, a cyberinfrastructure is needed to facilitate and expedite research into the complex management of these two systems. To address these issues, we developed a High Performance Computing (HPC) framework for stochastic co-optimization of water and energy resources to inform water allocation and electricity demand. The project aims to improve conjunctive management of water and power systems under climate change by incorporating improved ensemble forecast models of streamflow and power demand. First, by downscaling and spatio-temporally disaggregating multimodel climate forecasts from General Circulation Models (GCMs), temperature and precipitation forecasts are obtained and input into multi-reservoir and power systems models. Extended from Optimus (Optimization Methods for Universal Simulators), the framework drives the multi-reservoir model and the power system model Temoa (Tools for Energy Model Optimization and Analysis), and uses a Particle Swarm Optimization (PSO) algorithm to solve high-dimensional stochastic problems. The utility of climate forecasts for the cost of water and power systems operations is assessed and quantified based on different forecast scenarios (i.e., no-forecast, multimodel forecast, and perfect forecast). Analysis of risk management actions and renewable energy deployments will be investigated for the Catawba River basin, an area with adequate hydroclimate prediction skill and a critical basin with 11 reservoirs that supplies water and generates power for both North and South Carolina. Further research using this scalable decision-support framework will provide
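The PSO search driving the co-optimization can be illustrated with a minimal, self-contained loop. The sphere objective below is a stand-in for the actual water-power cost model, and the inertia and attraction coefficients are conventional textbook defaults, not values from the study:

```python
# Minimal particle swarm optimization (PSO) sketch: each particle is
# pulled toward its personal best and the swarm's global best position.
import random

def pso(f, dim=3, n=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]            # personal best positions
    gbest = min(pbest, key=f)[:]           # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]                                  # inertia
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))  # minimize a toy sphere function
```

In the HPC setting described above, the expensive part is evaluating f (a full reservoir/power simulation), which is why the particle evaluations are farmed out across cluster nodes.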

  8. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works. We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting and animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, even dynamically as a function of object properties, with instant visual feedback to the user.

  9. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Built on a key-value database, the ND technique uses the complement set of the observational data to derive the required information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that are absent, its overall performance, including querying and deriving the data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
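The complement-set idea can be shown with a toy example: when the universe of expected records is known in advance (say, one record per frequency channel per time slot, as for a regularly sampled instrument), storing only the absent records lets the present ones be derived on demand. The record naming below is an assumption for illustration, not the MUSER schema:

```python
# Toy negative-database sketch: store the complement (absent records) and
# derive the present records from a fixed, enumerable universe.

def make_universe(slots, channels):
    """All (slot, channel) records a regular observation is expected to produce."""
    return {(s, c) for s in range(slots) for c in range(channels)}

def present_records(universe, negative_db):
    """Derive the records that exist from the stored complement set."""
    return universe - negative_db

universe = make_universe(slots=2, channels=4)  # 8 expected records
missing = {(0, 3), (1, 0)}                     # only the absences are stored
have = present_records(universe, missing)      # 6 records derived on demand
```

The storage win comes when absences are rare relative to the universe: the ND stores a handful of missing keys instead of millions of present ones, which is the regime the MUSER's regular observing cadence produces.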

  10. A high performance inverter-fed drive system of an interior permanent magnet synchronous machine

    NASA Astrophysics Data System (ADS)

    Bose, B. K.

    A high performance, fully operational four-quadrant control scheme for an interior permanent magnet synchronous machine is described. The machine operates smoothly with full performance in the constant-torque region, as well as in the flux-weakening constant-power region, in both directions of motion. The transition between the constant-torque and constant-power regions is very smooth under all conditions of operation. The control in the constant-torque region is based on the vector or field-oriented technique with the direct axis aligned to the total stator flux, whereas the constant-power region control is implemented by orientation of the torque angle of the impressed square-wave voltage through a feedforward vector rotator. The control system is implemented digitally using a distributed microcomputer system, and all the essential feedback signals, such as torque and flux, are estimated with precision. The control is described with an outer torque control loop primarily for traction-type applications, but speed and position control loops can easily be added to extend its application to other industrial drives. A 70 hp drive system using a Neodymium-Iron-Boron PM machine and a transistor PWM inverter has been designed and extensively tested in the laboratory on a dynamometer, and its performance was found to be excellent.

  11. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    NASA Astrophysics Data System (ADS)

    Deatrich, D. C.; Liu, S. X.; Tafirout, R.

    2010-04-01

    We describe in this paper the design and implementation of Tapeguy, a high performance non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based; it controls and manages data and tape libraries. Its architecture is scalable and includes Dataset Writing control, a Read-back Queuing mechanism, and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata information for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Tapeguy Dataset Writing groups files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, threshold, or external callback mechanisms. Tapeguy Read-back Queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. Implementation of priorities will guarantee file delivery to all clients in a timely manner.
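The elevator-style read-back reordering can be sketched very simply: group requests by tape and sweep each tape in file-position order, so each tape is mounted once per batch instead of once per request. The field names below are illustrative, not Tapeguy's actual schema:

```python
# Hedged sketch of elevator-style read reordering for tape: sort requests
# by (tape, position) so each tape is mounted once and read in one sweep.

def elevator_order(requests):
    """requests: list of (tape_id, position, filename) tuples."""
    return sorted(requests, key=lambda r: (r[0], r[1]))

reqs = [("T2", 40, "b"), ("T1", 10, "a"), ("T2", 5, "c"), ("T1", 90, "d")]
plan = elevator_order(reqs)
# -> all T1 reads in ascending position, then all T2 reads: the arrival
#    order would have forced four mounts; the plan needs only two
```

A production queue would additionally weigh request priority and age so a busy tape cannot starve other clients, which is what the priority mechanism mentioned above addresses.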

  12. Guidelines for application of fluorescent lamps in high-performance avionic backlight systems

    NASA Astrophysics Data System (ADS)

    Syroid, Daniel D.

    1997-07-01

    Fluorescent lamps have proven to be well suited for use in high performance avionic backlight systems as demonstrated by numerous production applications for both commercial and military cockpit displays. Cockpit display applications include: Boeing 777, new 737s, F-15, F-16, F-18, F-22, C- 130, Navy P3, NASA Space Shuttle and many others. Fluorescent lamp based backlights provide high luminance, high lumen efficiency, precision chromaticity and long life for avionic active matrix liquid crystal display applications. Lamps have been produced in many sizes and shapes. Lamp diameters range from 2.6 mm to over 20 mm and lengths for the larger diameter lamps range to over one meter. Highly convoluted serpentine lamp configurations are common as are both hot and cold cathode electrode designs. This paper will review fluorescent lamp operating principles, discuss typical requirements for avionic grade lamps, compare avionic and laptop backlight designs and provide guidelines for the proper application of lamps and performance choices that must be made to attain optimum system performance considering high luminance output, system efficiency, dimming range and cost.

  13. Systems design of high performance stainless steels I. Conceptual and computational design

    NASA Astrophysics Data System (ADS)

    Campbell, C. E.; Olson, G. B.

    2000-10-01

    Application of a systems approach to computational materials design led to the development of a high performance stainless steel. The systems approach highlighted the integration of processing/structure/property/performance relations with mechanistic models to achieve desired quantitative property objectives. The mechanistic models applied to the martensitic transformation behavior included the Olson-Cohen model for heterogeneous nucleation and the Ghosh-Olson solid-solution strengthening model for interfacial mobility. Strengthening theory employed modeling of the coherent M2C precipitation in a BCC matrix, which is initially in a paraequilibrium-with-cementite condition. The calibration of the M2C coherency used available small-angle neutron scattering (SANS) data to determine a composition-dependent strain energy and a composition-independent interfacial energy. Multicomponent pH-potential diagrams provided an effective tool for evaluating oxide stability. Constrained equilibrium calculations correlated oxide stability to Cr enrichment in the metastable spinel film, allowing more efficient use of alloy Cr content. The composition constraints acquired from multicomponent solidification simulations improved castability. Integration of these models, using multicomponent thermodynamic and diffusion software programs, then enabled the design of a carburizable, secondary-hardening martensitic stainless steel for advanced bearing applications.

  14. Detection of HEMA in self-etching adhesive systems with high performance liquid chromatography

    NASA Astrophysics Data System (ADS)

    Panduric, V.; Tarle, Z.; Hameršak, Z.; Stipetić, I.; Matosevic, D.; Negovetić-Mandić, V.; Prskalo, K.

    2009-04-01

    One of the factors that can decrease the hydrolytic stability of self-etching adhesive systems (SEAS) is 2-hydroxyethyl methacrylate (HEMA). Due to the hydrolytic instability of acidic methacrylate monomers in SEAS, HEMA can be present even if the manufacturer did not include it in the original composition. The aim of the study was to determine whether HEMA is present as a result of hydrolytic decomposition of methacrylates during storage, which results in loss of adhesion strength to the hard dental tissues of the tooth crown. The three most commonly used SEAS were tested under different storage conditions: AdheSE ONE, G-Bond, and iBond. High performance liquid chromatography analysis was performed on a Nucleosil C 18-100 5 μm (250 × 4.6 mm) column with Knauer K-501 pumps and a Wellchrom DAD K-2700 detector at 215 nm. Data were collected and processed by EuroCrom 2000 HPLC software. Calibration curves were made relating eluted peak area to known concentrations of HEMA (purchased from Fluka). The elution time for HEMA is 12.25 min at a flow rate of 1.0 ml/min. The results indicate that no HEMA was present in AdheSE ONE, because its methacrylates are substituted with methacrylamides, which seem to be more stable under acidic aqueous conditions. In all other adhesive systems HEMA was detected.

  15. Engaging Employees: The Importance of High-Performance Work Systems for Patient Safety.

    PubMed

    Etchegaray, Jason M; Thomas, Eric J

    2015-12-01

    To develop and test survey items that measure high-performance work systems (HPWSs), report psychometric characteristics of the survey, and examine associations between HPWSs and teamwork culture, safety culture, and overall patient safety grade. We reviewed the literature to determine dimensions of HPWSs and then asked executives which dimensions they viewed as most important for safety and quality. We then created a HPWS survey to measure the most important HPWS dimensions. We administered an anonymous, electronic survey to employees with direct patient care roles at a large hospital system in the Southern United States and looked for linkages between HPWSs, culture, and outcomes. Similarities existed between the HPWS practices viewed as most important by previous researchers and by health-care executives. The HPWS survey was found to be reliable, distinct from safety culture and teamwork culture based on a confirmatory factor analysis, and was the strongest predictor of the extent to which employees felt comfortable speaking up about patient safety problems, as well as of patient safety grade. We used information from a literature review and executive input to create a reliable and valid HPWS survey. Future research needs to examine whether HPWSs are associated with additional safety and quality outcomes.

  16. Engineering Development of Coal-Fired High-Performance Power Systems

    SciTech Connect

    York Tsuo

    2000-12-31

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, the University of Tennessee Space Institute, and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing, and pilot plant testing. Research and development is being done on the HIPPS subsystems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately. This report addresses the areas of technical progress for this quarter. Details of the syngas cooler design are given in this report. Final construction work on the CFB pyrolyzer pilot plant started during this quarter. No experimental testing was performed during this quarter. The proposed test matrix for the future CFB pyrolyzer tests is given in this report. Besides testing various fuels, bed temperature will be the primary test parameter.

  17. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  18. A high performance frequency standard and distribution system for Cassini Ka-band experiment

    NASA Technical Reports Server (NTRS)

    Wang, Rabi T.; Calhoun, M. D.; Kirk, A.; Diener, W. A.; Dick, G. J.; Tjoelker, R. L.

    2005-01-01

    This paper provides an overview and update of a specialized frequency reference system for the NASA Deep Space Network (DSN) to support Ka-band radio science experiments with the Cassini spacecraft, currently orbiting Saturn. Three major components, a Hydrogen Maser, a Stabilized Fiber-optic Distribution Assembly (SFODA), and a 10 Kelvin Cryocooled Sapphire Oscillator (10K CSO) with frequency-lock loop, are integrated to achieve the very high performance, ground-based frequency reference at a remote antenna site located 16 km from the hydrogen maser. The typical measured Allan deviation is 1.6 x 10(-14) at 1 second and 1.7 x 10(-15) at 1000 seconds averaging intervals. Recently two 10K CSOs have been compared in situ while operating at the remote DSN site DSS-25. The CSO references were used operationally to downconvert the Ka-band downlink received from the Cassini spacecraft in a series of occultation measurements performed over a 78 day period from March to June 2005.
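Stability figures like those quoted above are computed from fractional-frequency samples with the Allan deviation. The following is the standard non-overlapping estimator applied to synthetic white noise, a generic sketch rather than the paper's measurement pipeline:

```python
# Non-overlapping Allan deviation from fractional-frequency samples y,
# at averaging factor m. Synthetic white noise stands in for real
# oscillator comparison data.
import math
import random

def allan_dev(y, m=1):
    """sigma_y(m*tau0): half the mean squared difference of adjacent m-averages."""
    avg = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(avg[i + 1] - avg[i]) ** 2 for i in range(len(avg) - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

rng = random.Random(0)
y = [rng.gauss(0, 1e-14) for _ in range(1000)]
# for white frequency noise, allan_dev falls roughly as 1/sqrt(m),
# which is why the 1000 s figure above is well below the 1 s figure
```

Overlapping and modified variants give tighter confidence intervals at long averaging times; the simple form here is enough to show the tau-dependence.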

  19. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-01-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard) coal providing {ge} 65% of heat input; all solid wastes benign; cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  20. Pyrolytic carbon-coated stainless steel felt as a high-performance anode for bioelectrochemical systems.

    PubMed

    Guo, Kun; Hidalgo, Diana; Tommasi, Tonia; Rabaey, Korneel

    2016-07-01

    Scale-up of bioelectrochemical systems (BESs) requires highly conductive, biocompatible, and stable electrodes. Here we present pyrolytic carbon-coated stainless steel felt (C-SS felt) as a high-performance and scalable anode. The electrode is created by generating a carbon layer on stainless steel felt (SS felt) via a multi-step deposition process involving α-d-glucose impregnation, caramelization, and pyrolysis. Physicochemical characterization of the surface shows that a thin (20±5μm) and homogeneous layer of polycrystalline graphitic carbon was obtained on the SS felt surface after modification. The carbon coating significantly increases the biocompatibility, enabling robust electroactive biofilm formation. The C-SS felt electrodes reach current densities (jmax) of 3.65±0.14mA/cm(2) within 7days of operation, which is 11 times higher than plain SS felt electrodes (0.30±0.04mA/cm(2)). The excellent biocompatibility, high specific surface area, high conductivity, good mechanical strength, and low cost make C-SS felt a promising electrode for BESs.

  1. High-performance immunoassays based on through-stencil patterned antibodies and capillary systems.

    PubMed

    Ziegler, Jörg; Zimmermann, Martin; Hunziker, Patrick; Delamarche, Emmanuel

    2008-03-01

    We present a simple method to pattern capture antibodies (cAbs) on poly(dimethylsiloxane) (PDMS) with high accuracy, in a manner compatible with mass fabrication, for use with capillary systems (CSs), using stencils microfabricated in Si. Capture antibodies are patterned as 60-270 microm wide and 2 mm long lines on PDMS and used with CSs that have been optimized for convenient handling, pipetting of solutions, pumping of liquids such as human blood serum, and visualization of signals for fluorescence immunoassays. With the use of this method, C-reactive protein (CRP) is detected with a sensitivity of 0.9 ng mL(-1) (7.8 pM) in 1 microL of CRP-spiked human serum, within 11 min, using only four pipetting steps and a total sample and reagent volume of 1.35 microL. This exemplifies the high performance that can be achieved using this approach and an otherwise conventional surface sandwich fluorescence immunoassay. The method is simple and flexible and should therefore be applicable to a large number of demanding immunoassays.

  2. High-performance CMOS image sensors at BAE SYSTEMS Imaging Solutions

    NASA Astrophysics Data System (ADS)

    Vu, Paul; Fowler, Boyd; Liu, Chiao; Mims, Steve; Balicki, Janusz; Bartkovjak, Peter; Do, Hung; Li, Wang

    2012-07-01

    In this paper, we present an overview of high-performance CMOS image sensor products developed at BAE SYSTEMS Imaging Solutions designed to satisfy the increasingly challenging technical requirements for image sensors used in advanced scientific, industrial, and low light imaging applications. We discuss the design and present the test results of a family of image sensors tailored for high imaging performance and capable of delivering sub-electron readout noise, high dynamic range, low power, high frame rates, and high sensitivity. We briefly review the performance of the CIS2051, a 5.5-Mpixel image sensor, which represents our first commercial CMOS image sensor product that demonstrates the potential of our technology, then we present the performance characteristics of the CIS1021, a full HD format CMOS image sensor capable of delivering sub-electron read noise performance at 50 fps frame rate at full HD resolution. We also review the performance of the CIS1042, a 4-Mpixel image sensor which offers better than 70% QE @ 600nm combined with better than 91dB intra scene dynamic range and about 1 e- read noise at 100 fps frame rate at full resolution.

  3. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-04-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%, NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard) coal providing {ge} 65% of heat input, all solid wastes benign, and cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  4. Engineering development of coal-fired high performance power systems phase 2 and 3

    SciTech Connect

    Unknown

    1999-08-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le}10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; and cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.2 HITAF Air Heaters; and Task 2.4 Duct Heater and Gas Turbine Integration.

  5. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1998-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) {ge} 47%; NOx, SOx, and particulates {le} 10% NSPS (New Source Performance Standard); coal providing {ge} 65% of heat input; all solid wastes benign; cost of electricity {le} 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAF Combustor; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  6. Analysis of starch in food systems by high-performance size exclusion chromatography.

    PubMed

    Ovando-Martínez, Maribel; Whitney, Kristin; Simsek, Senay

    2013-02-01

    Starch has unique physicochemical characteristics among food carbohydrates. Starch contributes to the physicochemical attributes of food products made from roots, legumes, cereals, and fruits. It occurs naturally as distinct particles called granules. Most starch granules are a mixture of two sugar polymers: a highly branched polysaccharide named amylopectin and an essentially linear polysaccharide named amylose. The starch contained in food products undergoes changes during processing, which alters the starch molecular weight and the amylose to amylopectin ratio. The objective of this study was to develop a new, simple, one-step, and accurate method for simultaneous determination of the amylose to amylopectin ratio as well as the weight-averaged molecular weight of starch in food products. Starch from bread flour, canned peas, corn flake cereal, snack crackers, canned kidney beans, pasta, potato chips, and white bread was extracted by dissolving in KOH and urea and precipitating with ethanol. Starch samples were solubilized and analyzed on a high-performance size exclusion chromatography (HPSEC) system. To verify the identity of the peaks, fractions were collected and soluble starch and beta-glucan assays were performed in addition to gas chromatography analysis. We found that all the fractions contain only glucose and that the soluble starch assay correlates with the HPSEC fractionation. This new method can be used to determine the amylose to amylopectin ratio and the weight-averaged molecular weight of starch from various food products using as little as 25 mg of dry sample. © 2013 Institute of Food Technologists®

  7. Advanced Insulation for High Performance Cost-Effective Wall, Roof, and Foundation Systems Final Report

    SciTech Connect

    Costeux, Stephane; Bunker, Shanon

    2013-12-20

    The objective of this project was to explore and potentially develop high-performing insulation with increased R/inch and low impact on climate change that would help design highly insulating building envelope systems with more durable performance and lower overall system cost than envelopes of equivalent performance made with materials available today. The proposed technical approach relied on insulation foams with nanoscale pores (about 100 nm in size) in which heat transfer is decreased. Through the development of new foaming methods, new polymer formulations, and new analytical techniques, and by advancing the understanding of how cells nucleate, expand, and stabilize at the nanoscale, Dow successfully invented and developed methods to produce foams with 100 nm cells and 80% porosity by batch foaming at the laboratory scale. Measurements of the gas conductivity on small nanofoam specimens confirmed quantitatively the benefit of nanoscale cells (the Knudsen effect) for increasing insulation value, which was the key technical hypothesis of the program. To bring this technology closer to a viable semi-continuous/continuous process, the project team modified an existing continuous extrusion foaming process and designed and built a custom system to produce 6" x 6" foam panels. Dow demonstrated for the first time that nanofoams can be produced in both processes. However, due to technical delays, the foam characteristics achieved so far fall short of the 100 nm target set for optimal insulation foams. In parallel with the technology development, effort was directed toward determining the most promising applications for nanocellular insulation foam. A Voice of Customer (VOC) exercise confirmed that demand for high-R-value products will rise due to increased building code requirements in the near future, but that acceptance of novel products by the building industry may be slow. Partnerships with green builders, initial launches in smaller markets (e.g. EIFS

  8. HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    SciTech Connect

    Kim, Youngjae; Gupta, Aayush; Urgaonkar, Bhuvan; Piotr, Berman; Sivasubramaniam, Anand

    2011-01-01

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based solid-state drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than magnetic hard disk drives (HDDs), and can sometimes be as slow as the HDD (due to excessive garbage collection (GC) induced by high intensity of random writes). Given these trade-offs between HDDs and SSDs in terms of cost, performance, and lifetime, the current consensus among several storage experts is to view SSDs not as a replacement for the HDD but rather as a complementary device within the high-performance storage hierarchy. We design and evaluate such a hybrid system called HybridStore to provide: (a) HybridPlan: an improved capacity planning technique for administrators with the overall goal of operating within cost budgets and (b) HybridDyn: improved performance/lifetime guarantees during episodes of deviation from expected workloads through two novel mechanisms: write regulation and fragmentation busting. As an illustrative example of HybridStore's efficacy, HybridPlan is able to find the most cost-effective storage configuration for a large-scale workload of Microsoft Research and suggest one MLC SSD with ten 7.2K RPM HDDs instead of fourteen 7.2K RPM HDDs only. HybridDyn is able to reduce the average response time for an enterprise-scale, random-write-dominant workload by about 71% as compared to an HDD-based system.
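    The capacity-planning idea behind HybridPlan can be illustrated with a toy search over device mixes: keep the cheapest configuration that satisfies the workload's aggregate IOPS and capacity targets. The device prices and performance figures below are invented for illustration and are not from the paper, which uses a far more detailed model.

    ```python
    from itertools import product

    def cheapest_config(req_iops, req_tb, devices, max_each=20):
        """devices: name -> (cost_usd, random_iops, capacity_tb).
        Brute-force search over device counts; returns (cost, counts)."""
        names = sorted(devices)
        best = None
        for counts in product(range(max_each + 1), repeat=len(names)):
            iops = sum(n * devices[d][1] for n, d in zip(counts, names))
            cap = sum(n * devices[d][2] for n, d in zip(counts, names))
            if iops < req_iops or cap < req_tb:
                continue  # configuration does not meet the workload
            cost = sum(n * devices[d][0] for n, d in zip(counts, names))
            if best is None or cost < best[0]:
                best = (cost, dict(zip(names, counts)))
        return best

    # Hypothetical device catalog (illustrative numbers only).
    DEVICES = {"mlc_ssd": (400, 5000, 0.25),
               "hdd_7200": (100, 150, 2.0)}
    best = cheapest_config(req_iops=5500, req_tb=8, devices=DEVICES)
    ```

    With these made-up numbers the planner picks one SSD for random IOPS plus a few HDDs for capacity, mirroring the SSD-plus-HDD mixes HybridPlan recommends.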

  9. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    Unknown

    1999-02-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. A general arrangement drawing of the char transfer system was forwarded to SCS for their review. Structural steel drawings were used to generate a three-dimensional model of the char

  10. Design of a high-performance telepresence system incorporating an active vision system for enhanced visual perception of remote environments

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Asbery, Richard

    1995-12-01

    This paper describes the design, development and implementation of a telepresence system for hazardous environment applications. Its primary feature is a high performance active stereo vision system slaved to the motion of the operator's head. To simulate the presence of an operator in a remote, hazardous environment, it is necessary to provide sufficient visual information about the remote environment. The operator must be able to interact with the environment so that he can carry out manipulative tasks. To achieve an enhanced sense of visual perception we have developed a tightly integrated pan and tilt stereo vision system with a head-mounted display. The motion of the operator's head is monitored by a six-DOF sensor which provides the demand signals to servocontrol the active vision system. The system we have developed is compact yet high performance, employing mechatronic principles to deliver a unit that can be mounted on a small mobile platform. We have also developed an open architecture controller to implement the dynamic, active vision system, which exhibits the dynamic performance characteristics of the human head-eye system so as to form a natural and intuitive interface. A series of tests have been conducted to establish the system latency and to explore the effectiveness of remote 3D human perception, particularly with regard to manipulation tasks and navigation. The results of these tests are presented.
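    The head-slaved servocontrol described above can be sketched as a simple proportional rate law with saturation: the six-DOF sensor's yaw/pitch become demand signals for the pan-tilt head. The gain and rate limit below are illustrative assumptions, not parameters of the actual system.

    ```python
    import math

    def pan_tilt_demand(head_yaw, head_pitch, cam_yaw, cam_pitch,
                        kp=4.0, rate_limit=math.radians(120)):
        """Map head orientation (rad) to pan/tilt rate demands (rad/s)
        via proportional control on the tracking error, saturated at an
        assumed 120 deg/s slew limit."""
        def clamp(v):
            return max(-rate_limit, min(rate_limit, v))
        return (clamp(kp * (head_yaw - cam_yaw)),
                clamp(kp * (head_pitch - cam_pitch)))
    ```

    A real implementation would add velocity feed-forward and latency compensation, but the proportional core captures how head motion drives the camera platform.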

  11. High performance dash on warning air mobile, missile system. [intercontinental ballistic missiles - systems analysis

    NASA Technical Reports Server (NTRS)

    Levin, A. D.; Castellano, C. R.; Hague, D. S.

    1975-01-01

    An aircraft-missile system which performs a high-acceleration takeoff followed by a supersonic dash to a 'safe' distance from the launch site is presented. Topics considered are: (1) technological feasibility of the dash-on-warning concept; (2) aircraft and boost trajectory requirements; and (3) partial cost estimates for a fleet of aircraft which provide 200 missiles on airborne alert. Various aircraft boost propulsion systems were studied, such as an unstaged cryogenic rocket, an unstaged storable liquid, and a staged solid rocket system. Various wing planforms were also studied. Vehicle gross weights are given. The results indicate that the dash-on-warning concept will meet expected performance criteria and can be implemented using existing technology, such as all-aluminum aircraft and existing high-bypass-ratio turbofan engines.

  12. WDM package enabling high-bandwidth optical intrasystem interconnects for high-performance computer systems

    NASA Astrophysics Data System (ADS)

    Schrage, J.; Soenmez, Y.; Happel, T.; Gubler, U.; Lukowicz, P.; Mrozynski, G.

    2006-02-01

    Optical interconnection technology, long established in long-haul, metro-access and intersystem links, is being applied at increasingly shorter distances. Intrasystem interconnects such as data busses between microprocessors and memory blocks are still based on copper today. This causes a bottleneck in computer systems, since the achievable bandwidth of electrical interconnects is limited by their underlying physical properties. Approaches to solve this problem by embedding optical multimode polymer waveguides into the board (electro-optical circuit board technology, EOCB) have been reported earlier. The basic feasibility of optical interconnection technology in chip-to-chip applications has been validated in a number of projects. For reasons of cost, waveguides with large cross sections are used in order to relax alignment requirements and to allow automatic placement and assembly without any active alignment of components. On the other hand, the bandwidth of these highly multimodal waveguides is restricted by mode dispersion. The advance of WDM technology towards intrasystem applications will provide the high bandwidth required for future high-performance computer systems: assuming, for example, 8 wavelength channels of 12 Gbps (SDR) each, optical on-board interconnects can be realized with data rates an order of magnitude higher than those of electrical interconnects over the distances typically found on today's computer boards and backplanes. The data rate doubles if DDR signaling is applied to the optical signals as well. In this paper we discuss an approach for a hybrid integrated optoelectronic WDM package which might enable the application of WDM technology to EOCB.
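    The quoted bandwidth figures follow from simple channel arithmetic, sketched here for clarity:

    ```python
    def aggregate_gbps(channels=8, per_channel_gbps=12, ddr=False):
        """Aggregate data rate of a WDM on-board link: channel count times
        per-channel rate, doubled under DDR signaling.  Defaults follow the
        paper's 8-channel, 12 Gbps example."""
        return channels * per_channel_gbps * (2 if ddr else 1)

    print(aggregate_gbps())          # 96 Gbps with SDR
    print(aggregate_gbps(ddr=True))  # 192 Gbps with DDR
    ```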

  13. Coal-fired high performance power generating system. Quarterly progress report

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: > 47% thermal efficiency; NO{sub x}, SO{sub x} and particulate emissions < 25% of NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MW{sub e} combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components, and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NO{sub x} production, minimum burnout lengths, combustion temperatures and even particulate impact on the combustor walls. When our model is applied to the long flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  14. Determination of the kinetic rate constant of cyclodextrin supramolecular systems by high performance affinity chromatography.

    PubMed

    Li, Haiyan; Ge, Jingwen; Guo, Tao; Yang, Shuo; He, Zhonggui; York, Peter; Sun, Lixin; Xu, Xu; Zhang, Jiwen

    2013-08-30

    It is extremely difficult to measure the kinetics of supramolecular systems with extensive, weak binding (Ka < 10^5 M^-1) and fast dissociation, such as those composed of cyclodextrins and drugs. In this study, a modified peak profiling method based on high performance affinity chromatography (HPAC) was established to determine the dissociation rate constant of cyclodextrin supramolecular systems. The interactions of β-cyclodextrin with acetaminophen and sertraline were used to exemplify the method. The retention times, variances and plate heights of the peaks for acetaminophen or sertraline and a conventional non-retained substance (H2O) on a β-cyclodextrin bonded column and a control column were determined at four flow rates under linear elution conditions. Then, plate heights for the theoretical non-retained substance were estimated by the modified HPAC method, in consideration of the diffusion and stagnant mobile phase mass transfer. As a result, apparent dissociation rate constants of 1.82 (±0.01) s^-1 and 3.55 (±0.37) s^-1 were estimated for acetaminophen and sertraline, respectively, at pH 6.8 and 25°C with multiple flow rates. Following subtraction of the non-specific binding with the support, dissociation rate constants were estimated as 1.78 (±0.00) and 1.91 (±0.02) s^-1 for acetaminophen and sertraline, respectively. These results for acetaminophen and sertraline were in good agreement with the magnitude of the rate constants for other drugs determined by capillary electrophoresis reported in the literature and by the peak fitting method we performed. The method described in this work is thought to be suitable for other supramolecules with relatively weak, fast and extensive interactions.
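    One common form of the peak-profiling relation estimates the apparent dissociation rate constant from the plate-height difference between a retained and a non-retained solute; the paper's modified method additionally corrects for diffusion and stagnant-mobile-phase mass transfer, which this simplified sketch omits. The numbers below are illustrative, not the paper's acetaminophen data.

    ```python
    def dissociation_rate(u, k_prime, H_R, H_M):
        """Peak-profiling estimate (simplified form):
            kd = 2 * k' * u / ((H_R - H_M) * (1 + k')**2)
        u: mobile-phase linear velocity (cm/s); H_R, H_M: plate heights
        (cm) of retained and non-retained peaks; returns kd in 1/s."""
        return 2.0 * k_prime * u / ((H_R - H_M) * (1.0 + k_prime) ** 2)

    # Illustrative values: u = 0.1 cm/s, k' = 0.5, H_R = 0.012 cm, H_M = 0.002 cm
    kd = dissociation_rate(0.1, 0.5, 0.012, 0.002)
    ```

    Repeating this at several flow rates and extrapolating, as the paper does, separates kinetic band broadening from other plate-height contributions.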

  15. System and method for on demand, vanishing, high performance electronic systems

    SciTech Connect

    Shah, Kedar G.; Pannu, Satinderpall S.

    2016-03-22

    An integrated circuit system having an integrated circuit (IC) component which is able to have its functionality destroyed upon receiving a command signal. The system may involve a substrate with the IC component being supported on the substrate. A module may be disposed in proximity to the IC component. The module may have a cavity and a dissolving compound in a solid form disposed in the cavity. A heater component may be configured to heat the dissolving compound to a point of sublimation where the dissolving compound changes from a solid to a gaseous dissolving compound. A triggering mechanism may be used for initiating a dissolution process whereby the gaseous dissolving compound is allowed to attack the IC component and destroy a functionality of the IC component.

  16. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    1998-11-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem is being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. The design of the char burner was completed during this quarter. The burner is designed for arch-firing and has a maximum capacity of 30 MMBtu/hr. This size represents a half scale version of a typical commercial burner. The burner is outfitted with

  17. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures but also aerodynamic failures such as wing-body damage often result in substantially deteriorated performance because of low available redundancy. As a result, the remaining control actuators may yield substantially lower maneuvering capabilities which do not permit the accomplishment of the aircraft's originally specified mission. The problem is to solve the control reconfiguration over the available control redundancy when mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In the case of nominal operation with possible minor failure(s), a standard adaptive controller achieves the control allocation. This is referred to as the first layer, the controller layer. Performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. The uppermost mission adjustment is in the third layer, the mission layer, invoked when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission decision property based on the currently available resources. The contribution of the research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in practice. Moreover, the emphasis is on the mission design capabilities which may guarantee the stability of the aircraft with restricted post
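    The three-layer escalation described above can be sketched as a simple dispatch: minor failures stay in the controller layer, larger capability loss degrades the reference model, and an infeasible mission triggers mission redesign. The health metric and thresholds are illustrative assumptions, not the dissertation's actual criteria.

    ```python
    def select_layer(actuator_health, mission_feasible):
        """Pick the active RAFTC layer.  actuator_health in [0, 1] is an
        assumed aggregate measure of remaining control authority; the 0.8
        threshold is invented for illustration."""
        if actuator_health >= 0.8:
            return "controller"   # adaptive control allocation only
        if mission_feasible:
            return "reference"    # accept degraded transient performance
        return "mission"          # re-optimize the command signal
    ```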

  18. Achieving organisational competence for clinical leadership: the role of high performance work systems.

    PubMed

    Leggat, Sandra G; Balding, Cathy

    2013-01-01

    While there has been substantial discussion about the potential for clinical leadership in improving quality and safety in healthcare, there has been little robust study. The purpose of this paper is to present the results of a qualitative study with clinicians and clinician managers to gather opinions on the appropriate content of an educational initiative being planned to improve clinical leadership in quality and safety among medical, nursing and allied health professionals working in primary, community and secondary care. In total, 28 clinicians and clinician managers throughout the state of Victoria, Australia, participated in focus groups to provide advice on the development of a clinical leadership program in quality and safety. An inductive, thematic analysis was completed to enable the themes to emerge from the data. Overwhelmingly, the participants conceptualised clinical leadership in relation to organisational factors. Only four individual factors, comprising emotional intelligence, resilience, self-awareness and understanding of other clinical disciplines, were identified as being important for clinical leaders. Conversely, seven organisational factors, comprising role clarity and accountability, security and sustainability for clinical leaders, selective recruitment into clinical leadership positions, teamwork and decentralised decision making, training, information sharing, and transformational leadership, were seen as essential, but the participants indicated they were rarely addressed. The human resource management literature includes these seven components, with contingent reward, reduced status distinctions and measurement of management practices, as the essential organisational underpinnings of high performance work systems. The results of this study propose that clinical leadership is an organisational property, suggesting that capability frameworks and educational programs for clinical leadership need a broader organisation focus. The paper

  19. High-performance compact optical WDM transceiver module for passive double star subscriber systems

    NASA Astrophysics Data System (ADS)

    Ikushima, Ichiro; Himi, Susumu; Hamaguchi, Tsuruki; Suzuki, Munetoshi; Maeda, Narimichi; Kodera, Hiroshi; Yamashita, Kiichi

    1995-03-01

    High-performance transceiver-type optical WDM interface modules with a volume of only 36 cc have been developed for PDS subscriber systems. The new module comprises an optical WDM sub-module and hybrid-integrated transmitter and receiver circuits. In the WDM sub-module, a planar lightwave circuit chip was hermetically sealed together with laser and photodiode chips in order to minimize the size of the transceiver module. The lightwave circuit was formed on an optical-waveguide chip by adopting a high-silica-based optical-waveguide technology. The circuit has a 3-dB directional coupler for bi-directional transmission at a 1.3-micron wavelength through a single fiber and a wavelength division multiplexer between the 1.3-micron and 1.55-micron wavelengths. The overall characteristics achieved for the fabricated WDM sub-module were a responsivity of 0.25 +/- 0.05 A/W, an insertion loss of approximately 3 dB at 1.55 microns and an isolation of 35 dB between the two wavelengths. Optical output power of the fabricated transceiver module was -3.8 dBm. Also, a receiver sensitivity of less than -35 dBm with an overload of over -14 dBm was obtained by introducing high-speed automatic gain and threshold control techniques. Thus, an allowable span loss of over 30 dB and an optical dynamic range of over 20 dB were attained. The preamble bit length required to reach stable receiver operation was confirmed to be within three bits.
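    The span-loss and dynamic-range figures quoted above follow directly from the measured powers, as this small link-budget sketch shows:

    ```python
    def link_budget(tx_dbm, sensitivity_dbm, overload_dbm):
        """Allowable span loss = Tx power - receiver sensitivity;
        optical dynamic range = overload - sensitivity (dB/dBm)."""
        return tx_dbm - sensitivity_dbm, overload_dbm - sensitivity_dbm

    # Using the module's figures: -3.8 dBm output, -35 dBm sensitivity,
    # -14 dBm overload.
    span, dyn = link_budget(-3.8, -35.0, -14.0)
    ```

    This gives a span loss of 31.2 dB and a dynamic range of 21 dB, consistent with the "over 30 dB" and "over 20 dB" claims.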

  20. Towards a System for High-Performance, Multi-Language, Component-Based Modeling

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2008-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is a recently NSF-funded project that represents an effort to bring together a diverse community of surface dynamics modelers and model users. Key goals of the CSDMS project are to (1) promote open-source code sharing and re-use, (2) develop a review process for code contributions, (3) promote recognition of contributors, (4) develop a "library" of low-level software tools and higher-level models that can be linked as easily as possible into new applications and (5) provide resources to simplify the efforts of surface dynamics modelers. The architectural framework of CSDMS is being designed to allow code contributions to be in any of several different programming languages (language independence), to support a migration towards parallel computation and to support multiple operating systems (platform independence). After evaluating a number of different "coupling frameworks," the CSDMS project has decided to use a DOE-funded set of tools and standards called the Common Component Architecture (CCA) as the foundation for our model-linking efforts. CCA was specifically designed to meet the needs of high-performance, scientific computing. It also includes a powerful language-interoperability tool called Babel that permits communication between components written in any of several major programming languages, including C, C++, Java, Fortran (all years) and Python. The CSDMS project has been collecting open-source components from our modeling community in all of these languages, including a variety of terrestrial, marine, coastal and hydrological models. CSDMS is now focused on the problem of how best to wrap these components with interfaces that allow them to be linked together with maximum ease and flexibility. To this end, we are adapting a Java version of the OpenMI (Open Modeling Interface) standard and an associated software development kit for use within a CCA framework. Our goal is to combine the best
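    The kind of component interface CSDMS wraps models with can be sketched in miniature: a model exposes a small initialize/update/finalize lifecycle so a framework can step heterogeneous components together. The method names below are in the spirit of OpenMI/BMI-style interfaces but are illustrative, not the exact standard.

    ```python
    class OpenMILikeComponent:
        """Minimal wrapper giving any step function a uniform lifecycle
        that a coupling framework can drive."""
        def __init__(self, step_fn, state):
            self._step_fn, self.state, self.time = step_fn, state, 0.0

        def initialize(self, config=None):
            self.time = 0.0  # a real component would read its config here

        def update(self, dt):
            self.state = self._step_fn(self.state, dt)
            self.time += dt

        def finalize(self):
            pass  # release resources, close files, etc.

    # Usage: wrap a trivial "model" that decays a quantity by 10% per step.
    comp = OpenMILikeComponent(lambda s, dt: s * (1 - 0.1 * dt), state=100.0)
    comp.initialize()
    for _ in range(3):
        comp.update(1.0)
    comp.finalize()
    ```

    The framework never needs to know the model's internals, which is what makes multi-language coupling through Babel/CCA tractable.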

  1. High performance MRI simulations of motion on multi-GPU systems

    PubMed Central

    2014-01-01

    Background: MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Methods: Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results: Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions: MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer
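    The per-isochromat kernel work described above (phase precession plus displacement of moving tissue each timestep) can be sketched in simplified scalar form; the real simulator evaluates this inside GPU kernels with full 3D field and gradient maps, which this CPU sketch does not attempt.

    ```python
    import math

    def precess(mx, my, dt, gamma_hz_per_t, b_field_t):
        """Rotate the transverse magnetization of one isochromat by the
        local Larmor phase accumulated over dt."""
        phi = 2 * math.pi * gamma_hz_per_t * b_field_t * dt
        return (mx * math.cos(phi) - my * math.sin(phi),
                mx * math.sin(phi) + my * math.cos(phi))

    def move(pos, velocity, dt):
        """Displace an isochromat of a moving model (e.g. flow); the new
        position feeds the next timestep's field evaluation."""
        return tuple(p + v * dt for p, v in zip(pos, velocity))
    ```

    In a GPU implementation these two updates run independently for millions of isochromats per timestep, which is why the reported performance scales almost linearly with the number of cards.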

  2. High performance MRI simulations of motion on multi-GPU systems.

    PubMed

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation

  3. Instructional Leadership in Centralised Systems: Evidence from Greek High-Performing Secondary Schools

    ERIC Educational Resources Information Center

    Kaparou, Maria; Bush, Tony

    2015-01-01

    This paper examines the enactment of instructional leadership (IL) in high-performing secondary schools (HPSS), and the relationship between leadership and learning in raising student outcomes and encouraging teachers' professional learning in the highly centralised context of Greece. It reports part of a comparative research study focused on…

  5. Microdialysis based monitoring of subcutaneous interstitial and venous blood glucose in Type 1 diabetic subjects by mid-infrared spectrometry for intensive insulin therapy

    NASA Astrophysics Data System (ADS)

    Heise, H. Michael; Kondepati, Venkata Radhakrishna; Damm, Uwe; Licht, Michael; Feichtner, Franz; Mader, Julia Katharina; Ellmerer, Martin

    2008-02-01

    Implementing strict glycemic control can reduce the risk of serious complications in both diabetic and critically ill patients. For this purpose, many different blood glucose monitoring techniques and insulin infusion strategies have been tested towards the realization of an artificial pancreas under closed-loop control. In contrast to competing subcutaneously implanted electrochemical biosensors, microdialysis-based systems for sampling body fluids from either the interstitial adipose tissue compartment or from venous blood have been developed, which allow ex-vivo glucose monitoring by mid-infrared spectrometry. For the first option, a commercially available, subcutaneously inserted CMA 60 microdialysis catheter has been used routinely. The vascular body interface includes a double-lumen venous catheter in combination with whole blood dilution using a heparin solution. The diluted whole blood is transported to a flow-through dialysis cell, where the harvesting of analytes across the microdialysis membrane takes place at high recovery rates. The dialysate is continuously transported to the IR-sensor. Ex-vivo measurements lasting up to 28 hours were conducted on type-1 diabetic subjects. Experiments have shown excellent agreement between the sensor readout and the reference blood glucose concentration values. The simultaneous assessment of dialysis recovery rates enables reliable quantification of whole blood concentrations of glucose and metabolites (urea, lactate, etc.) after taking blood dilution into account. Our results from transmission spectrometry indicate that the developed bed-side device enables reliable long-term glucose monitoring with reagent- and calibration-free operation.
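    The recovery- and dilution-corrected back-calculation of whole-blood glucose can be sketched as follows; this is a simplified illustration of the correction the paper describes, and the concentrations, recovery rate and dilution factor below are invented, not the device's actual calibration.

    ```python
    def whole_blood_glucose(dialysate_conc, recovery, dilution_factor):
        """Back-calculate whole-blood glucose from the IR-sensor dialysate
        reading: divide by the microdialysis recovery rate, then scale by
        the heparin dilution of the sampled blood.  Units follow the
        input concentration (e.g. mg/dL)."""
        return dialysate_conc / recovery * dilution_factor

    # Illustrative numbers: 60 mg/dL in the dialysate, 90% recovery,
    # blood diluted 1.5x by the heparin solution.
    glucose = whole_blood_glucose(60.0, 0.9, 1.5)
    ```

    Because recovery is assessed simultaneously, the correction can track changes in membrane performance during long monitoring sessions.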

  6. A simple method for evaluating image quality of screen-film system using a high-performance digital camera

    NASA Astrophysics Data System (ADS)

    Fujita, Naotoshi; Yamazaki, Asumi; Ichikawa, Katsuhiro; Kodera, Yoshie

    2009-02-01

    Screen-film systems are still used in mammography today, so it remains important to measure their physical properties such as the modulation transfer function (MTF) and noise power spectrum (NPS). The MTF and NPS of screen-film systems are mostly measured using a microdensitometer. However, since microdensitometers are not commonly available in general hospitals, it is difficult to carry out these measurements regularly. In the past, Ichikawa et al. measured and evaluated the physical properties of medical liquid crystal displays using a high-performance digital camera. With such a camera, the physical properties of screen-film systems can be measured easily without a microdensitometer. We have therefore proposed a simple method for measuring the MTF and NPS of screen-film systems using a high-performance digital camera, based on the edge method (for evaluating MTF) and the one-dimensional fast Fourier transform (FFT) method (for evaluating NPS), respectively. As a result, the MTF and NPS evaluated using the high-performance digital camera corresponded closely with those evaluated using a microdensitometer. The calculation of MTF and NPS with a high-performance digital camera can therefore substitute for measurement with a microdensitometer. Further, this method also simplifies the evaluation of the physical properties of screen-film systems.
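    The edge method can be sketched end-to-end: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then normalize the magnitude of its Fourier transform. A pure-Python sketch (omitting the oversampling and smoothing a real measurement needs); for a perfect step edge the LSF is a delta, so the MTF is 1 at all frequencies.

    ```python
    import math

    def mtf_from_edge(esf):
        """Edge-method MTF: finite-difference the ESF into an LSF, take
        the DFT magnitude for the non-negative frequencies, and normalize
        to the zero-frequency value."""
        lsf = [esf[i + 1] - esf[i] for i in range(len(esf) - 1)]
        n = len(lsf)
        mags = []
        for k in range(n // 2 + 1):
            re = sum(lsf[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
            im = sum(lsf[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
            mags.append(math.hypot(re, im))
        return [m / mags[0] for m in mags]
    ```

    A blurred edge (an ESF that ramps over several pixels) instead yields an MTF that falls off with frequency, which is what distinguishes one screen-film system from another.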

  7. Silicon photonics-based laser system for high performance fiber sensing

    NASA Astrophysics Data System (ADS)

    Ayotte, S.; Faucher, D.; Babin, A.; Costin, F.; Latrasse, C.; Poulin, M.; G.-Deschênes, É.; Pelletier, F.; Laliberté, M.

    2015-09-01

    We present a compact four-laser source based on the low-noise, high-bandwidth Pound-Drever-Hall method and optical phase-locked loops for sensing narrow spectral features. Four semiconductor external-cavity lasers in butterfly packages are mounted on a shared electronics control board, and all other optical functions are integrated on a single silicon photonics chip. This high performance source is compact, automated, robust, operates over a wide temperature range, and remains locked for days. A laser-to-resonance frequency noise of 0.25 Hz/rt-Hz is demonstrated.

  8. Relationships of Cognitive and Metacognitive Learning Strategies to Mathematics Achievement in Four High-Performing East Asian Education Systems

    ERIC Educational Resources Information Center

    Areepattamannil, Shaljan; Caleon, Imelda S.

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education…

  9. Constructing a LabVIEW-Controlled High-Performance Liquid Chromatography (HPLC) System: An Undergraduate Instrumental Methods Exercise

    ERIC Educational Resources Information Center

    Smith, Eugene T.; Hill, Marc

    2011-01-01

    In this laboratory exercise, students develop a LabVIEW-controlled high-performance liquid chromatography system utilizing a data acquisition device, two pumps, a detector, and fraction collector. The programming experience involves a variety of methods for interface communication, including serial control, analog-to-digital conversion, and…

  12. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    NASA Astrophysics Data System (ADS)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support, and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM), by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT); the HPCMP Applications Software Initiative (HASI); and Frontier Projects. PETTT supports code conversion by providing assistance, expertise, and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will

  13. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    PubMed Central

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-01-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. Detection of lysozyme is mandatory, as it might trigger allergic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, innovative, highly specific staining protocols were developed, yielding improved sensitivity for protein detection on HPTLC plates compared with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides demonstrating its applicability for the very first time, the study shows that (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for semi-quantitative estimation of protein concentrations. PMID:27220270

  14. HPTLC-aptastaining - Innovative protein detection system for high-performance thin-layer chromatography

    NASA Astrophysics Data System (ADS)

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-05-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused, as an example, on lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. Detection of lysozyme is mandatory, as it might trigger allergic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, innovative, highly specific staining protocols were developed, yielding improved sensitivity for protein detection on HPTLC plates compared with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides demonstrating its applicability for the very first time, the study shows that (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for semi-quantitative estimation of protein concentrations.

  15. Rapid prenatal diagnosis of spinal muscular atrophy by denaturing high-performance liquid chromatography system.

    PubMed

    Shaw, Sheng-Wen; Cheng, Po-Jen; Chang, Shuenn-Dhy; Lin, Yu-Ting; Hung, Chia-Cheng; Chen, Chih-Ping; Su, Yi-Ning

    2008-01-01

    We evaluated the use of denaturing high-performance liquid chromatography (DHPLC) in prenatal diagnosis of spinal muscular atrophy (SMA). Thirty-three members of 7 families participated in carrier testing and disease detection for SMA. Prenatal genetic diagnosis was performed if both parents were carriers or any family member had SMA. DNA extracted from blood, chorionic villi and amniotic fluid was amplified and used for DHPLC. Twenty SMA carriers, seven SMA-affected cases, and six normal individuals were identified. SMA status was demonstrated by genotyping and total copy number determinations of SMN1 and SMN2. Families 1-3 were classified as group one (a previously born child affected by SMA). Group two, comprising families 4 and 5, had lost a child to an unknown muscular disease. Group three (SMA-affected parent) comprised families 6 and 7; carrier testing was done. DHPLC prenatal genetic diagnosis was made in seven pregnancies, one in each family (affected, n=2; carrier, n=3; normal, n=2). Pregnancy was terminated for the two affected fetuses. The others were delivered uneventfully and were SMA free. DHPLC allows prenatal diagnosis of SMA and determination of SMA status in adults, and SMN1 and SMN2 copy numbers can be determined.

  16. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  18. High-performance work systems in health care, part 3: the role of the business case.

    PubMed

    Song, Paula H; Robbins, Julie; Garman, Andrew N; McAlearney, Ann Scheck

    2012-01-01

    Growing evidence suggests the systematic use of high-performance work practices (HPWPs), or evidence-based management practices, holds promise to improve organizational performance, including quality and efficiency, in health care organizations. However, little is understood about the investment required for HPWP implementation or about the business case for that investment. The aim of this study is to enhance our understanding of organizations' perspectives on the business case for HPWP investment, including reasons for and approaches to evaluating that investment. We used a multicase study approach to explore the business case for HPWPs in U.S. health care organizations, conducting semistructured interviews with 67 key informants across five sites. All interviews were recorded, transcribed, and subjected to qualitative analysis using both deductive and inductive methods. The organizations in our study did not appear to have explicit financial return expectations for investments in HPWPs. Instead, the HPWP investment was viewed as an important factor contributing to successful execution of the organization's strategic priorities and a means of competitive differentiation in the market. Informants' characterizations of the HPWP investment did not involve financial terms; rather, descriptions of these investments as redeployment of existing resources or a shift of managerial time redirected attention from cost considerations. Evaluation efforts were rare, with organizations using broad organizational metrics to justify HPWP investment or avoiding formal evaluation altogether. Our findings are consistent with prior studies that have found that health care organizations have not systematically evaluated the financial outcomes of their quality-related initiatives or tend to forgo formal business case analysis for investments they may perceive as "inevitable." In the absence of a clearly described association between HPWPs and outcomes or some other external

  19. Coal-fired high performance power generating system. Draft quarterly progress report, January 1--March 31, 1995

    SciTech Connect

    1995-10-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤25% NSPS; coal providing ≥65% of heat input; and all solid wastes benign. A crucial aspect of the design is the integration of the gas turbine requirements with the HITAF output and steam cycle requirements. To take full advantage of modern, highly efficient aeroderivative gas turbines, the authors carried out a large number of cycle calculations to optimize their commercial plant designs for both greenfield and repowering applications.

  20. Coal-fired high performance power generating system. Quarterly progress report, July 1, 1993--September 30, 1993

    SciTech Connect

    Not Available

    1993-12-31

    This report covers work carried out under Task 3, Preliminary Research and Development, and Task 4, Commercial Generating Plant Design, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx, and particulates ≤25% NSPS; coal providing ≥65% of heat input; and all solid wastes benign. The report discusses progress in cycle analysis, chemical reactor modeling, ash deposition rate calculations for the HITAF (high temperature advanced furnace) convective air heater, air heater materials, and deposit initiation and growth on ceramic substrates.

  1. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A [Ridgefield, CT; Salapura, Valentina [Chappaqua, NY

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
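
    The partitioning mechanism described above can be illustrated with a toy model: each snoop unit carries a programmable domain register and forwards coherence traffic only to peers in the same domain. All class and field names below are illustrative sketches of the idea, not taken from the patent:

```python
class SnoopUnit:
    """Toy sketch: a snoop unit forwards coherence traffic only to peers
    whose programmable partition register matches its own."""
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.group = 0            # programmable coherence-domain register
        self.peers = []           # connections to all other snoop units
        self.invalidations = []   # coherence messages received

    def broadcast(self, address):
        # Forward an invalidation only within this unit's coherence domain.
        for peer in self.peers:
            if peer is not self and peer.group == self.group:
                peer.invalidations.append(address)

# Four processing units partitioned into two independent 2-CPU domains.
units = [SnoopUnit(i) for i in range(4)]
for u in units:
    u.peers = units
    u.group = u.cpu_id // 2       # units 0,1 -> domain 0; units 2,3 -> domain 1

units[0].broadcast(0x1000)        # only unit 1 observes this invalidation
```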

  2. High Performance MG-System Alloys For Weight Saving Applications: First Year Results From The Green Metallurgy EU Project

    NASA Astrophysics Data System (ADS)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Hofer, Markus; Kim, Shae K.

    The GREEN METALLURGY Project, a LIFE+ project co-financed by the EU Commission, has just concluded its first year. The Project seeks to establish manufacturing processes at a pre-industrial scale for nanostructured high-performance Mg-Zn(Y) magnesium alloys. The Project's goal is the reduction of the specific energy consumed and the overall carbon footprint produced in the cradle-to-exit-gate phases. Preliminary results address the potential of the upstream manufacturing process pathway. Two Mg-Zn(Y) system alloys have been produced from rapidly solidified powders and directly extruded to 100% densification. Examination of the mechanical properties showed that these materials exhibit strength and elongation comparable to several high-performing aluminum alloys: average UTS values of 390 MPa and 440 MPa, with elongations of 10% and 15%, for the two system alloys. These results, together with the targeted low environmental impact, make these novel Mg alloys competitive as lightweight high-performance materials for automotive components.

  3. Hierarchical rapid modeling and simulation of high-performance picture archive and communications systems

    NASA Astrophysics Data System (ADS)

    Anderson, Kenneth R.; Meredith, Glenn; Prior, Fred W.; Wirsz, Emil; Wilson, Dennis L.

    1992-07-01

    Due to the expense and time required to configure and evaluate large-scale PACS, rapid modeling and simulation of system configurations is critical. The results of the analysis can be used to drive the design of both hardware and software, and system designers can use the models during actual system integration. This paper shows how the LANNET II.5 and NETWORK II.5 modeling tools can be used hierarchically to model and simulate large PACS. A detailed description of the Communication Network model, one of three models used for the Medical Diagnostic Imaging Support System (MDIS) design analysis, is presented. The paper concludes with future issues in the modeling of MDIS and other large heterogeneous networks of computers and workstations, and explains how the models might be used throughout the system life cycle to reduce the operation and maintenance costs of the system.

  4. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    SciTech Connect

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  5. Development of Nano-structured Electrode Materials for High Performance Energy Storage System

    NASA Astrophysics Data System (ADS)

    Huang, Zhendong

    Systematic studies have been done to develop a low-cost, environmentally friendly, facile fabrication process for the preparation of high performance nanostructured electrode materials, and to fully understand the factors influencing electrochemical performance in lithium ion batteries (LIBs) and supercapacitors. For LIBs, LiNi1/3Co1/3Mn1/3O2 (NCM) with a 1D porous structure has been developed as a cathode material. The tube-like 1D structure consists of inter-linked, multi-facet nanoparticles of approximately 100-500 nm in diameter. The microscopically porous structure originates from the honeycomb-shaped precursor foaming gel, which serves as a self-template during the stepwise calcination process. The 1D NCM presents specific capacities of 153, 140, 130 and 118 mAh·g-1 at current densities of 0.1C, 0.5C, 1C and 2C, respectively. Subsequently, a novel stepwise crystallization process, consisting of a higher crystallization temperature and a longer period for grain growth, is employed to prepare single-crystal NCM nanoparticles. The modified sol-gel process followed by the optimized crystallization process results in significant improvements in the chemical and physical characteristics of the NCM particles, including a fully developed single-crystal NCM with uniform composition and a porous NCM architecture with a reduced degree of fusion and a large specific surface area. The NCM cathode material with these structural modifications in turn presents significantly enhanced specific capacities of 173.9, 166.9, 158.3 and 142.3 mAh·g-1 at 0.1C, 0.5C, 1C and 2C, respectively. Carbon nanotubes (CNTs) are used to improve the relatively low power capability and poor cyclic stability of NCM caused by its poor electrical conductivity. The NCM/CNT nanocomposite cathodes are prepared by simply mixing the two component materials followed by a thermal treatment. The CNTs were functionalized to obtain uniformly dispersed MWCNTs in the NCM matrix. The electrochemical

  6. Demonstration and Validation of Two Coat High Performance Coating System for Steel Structures in Corrosive Environments

    DTIC Science & Technology

    2016-12-01

    Comparison of material costs for two-coat and three-coat systems for projects using less than 200 gallons... Table 6. Comparison of material costs for two-coat and three-coat systems for projects using more than 200 gallons... the Materials and Structures Branch (CEERD-CFM) of the Facilities Division (CEERD-CF), Engineer Research and Development Center, Construction

  7. Teacher and School Leader Effectiveness: Lessons Learned from High-Performing Systems. Issue Brief

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2011

    2011-01-01

    In an effort to find best practices in enhancing teacher effectiveness, the Alliance for Excellent Education (Alliance) and the Stanford Center for Opportunity Policy in Education (SCOPE) looked abroad at education systems that appear to have well-developed and effective systems for recruiting, preparing, developing, and retaining teachers and…

  8. Multi-Core Technology for Fault Tolerant High-Performance Spacecraft Computer Systems

    NASA Astrophysics Data System (ADS)

    Behr, Peter M.; Haulsen, Ivo; Van Kampenhout, J. Reinier; Pletner, Samuel

    2012-08-01

    The current architectural trends in the field of multi-core processors can provide an enormous increase in processing power by exploiting the parallelism available in many applications. In particular, because of their high energy efficiency, it is clear that multi-core processor-based systems will also be used in future space missions. In this paper we present the system architecture of a powerful optical sensor system based on the eight-core multi-core processor P4080 from Freescale. The fault-tolerant structure and the highly effective FDIR concepts implemented on different hardware and software levels of the system are described in detail. The space application scenario, and thus the main requirements for the sensor system, have been defined by a complex tracking sensor application for autonomous landing or docking manoeuvres.

  9. A high performance imagery system for unattended ground sensor tactical deployments

    NASA Astrophysics Data System (ADS)

    Hartup, David C.; Bobier, Kevin; Marks, Brian A.; Dirr, William J.; Salisbury, Richard; Brown, Alistair; Cairnduff, Bruce

    2006-05-01

    Modern Unattended Ground Sensor (UGS) systems require transmission of high quality imagery to a remote location while meeting severe operational constraints such as extended mission life using battery operation. This paper describes a robust imagery system that provides excellent performance for both long range and short range stand-off scenarios. The imaging devices include a joint EO and IR solution that features low power consumption, quick turn-on time, high resolution images, advanced AGC and exposure control algorithms, digital zoom, and compact packaging. Intelligent camera operation is provided by the System Controller, which allows fusion of multiple sensor inputs and intelligent target recognition. The System Controller's communications package is interoperable with all SEIWG-005 compliant sensors. Image transmission is provided via VHF, UHF, or SATCOM links. The system has undergone testing at Yuma Proving Ground and Ft. Huachuca, as well as extensive company testing. Results from these field tests are given.

  10. Modeling and simulation of a high-performance PACS based on a shared file system architecture

    NASA Astrophysics Data System (ADS)

    Meredith, Glenn; Anderson, Kenneth R.; Wirsz, Emil; Prior, Fred W.; Wilson, Dennis L.

    1992-07-01

    Siemens and Loral Western Development Labs have designed a Picture Archiving and Communication System capable of supporting a large, fully digital hospital. Its functions include the management, storage and retrieval of medical images. The system may be modeled as a heterogeneous network of processing elements, transfer devices and storage units. Several discrete event simulation models have been designed to investigate different levels of the design. These models include the System Model, focusing on the flow of image traffic throughout the system, the Workstation Models, focusing on the internal processing in the different types of workstations, and the Communication Network Model, focusing on the control communication and host computer processing. The first two of these models are addressed here, with reference being made to a separate paper regarding the Communication Network Model. This paper describes some of the issues addressed with the models, the modeling techniques used and the performance results from the simulations. Important parameters of interest include: time to retrieve images from different possible storage locations and the utilization levels of the transfer devices and other key hardware components. To understand system performance under fully loaded conditions, the proposed system for the Madigan Army Medical Center was modeled in detail, as part of the Medical Diagnostic Imaging Support System (MDIS) proposal.
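
    The kind of discrete-event analysis described above, estimating image retrieval times and device utilization, can be illustrated with a toy single-server queueing model of archive retrievals. All names and parameter values below are hypothetical; the real PACS models simulated far more detail:

```python
import random

def simulate_retrievals(n_requests, mean_interarrival_s, mean_service_s, seed=1):
    """Toy discrete-event model: retrieval requests arrive at random and are
    served FIFO by one storage unit. Returns (mean retrieval time s, utilization)."""
    random.seed(seed)
    t_arrival = 0.0       # arrival time of the current request
    server_free_at = 0.0  # time the storage unit next becomes idle
    busy_time = 0.0
    total_time = 0.0
    for _ in range(n_requests):
        t_arrival += random.expovariate(1.0 / mean_interarrival_s)
        start = max(t_arrival, server_free_at)          # wait if unit is busy
        service = random.expovariate(1.0 / mean_service_s)
        server_free_at = start + service
        busy_time += service
        total_time += server_free_at - t_arrival        # queueing + transfer
    return total_time / n_requests, busy_time / server_free_at

# Hypothetical load: a request every 10 s on average, 6 s mean transfer time.
mean_rt, util = simulate_retrievals(20000, 10.0, 6.0)
```

    For these rates an M/M/1 queue predicts a mean retrieval time of 1/(1/6 - 1/10) = 15 s at 60% utilization, which the simulation reproduces.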

  11. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  13. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    SciTech Connect

    Tan, Li; Chen, Zizhong; Song, Shuaiwen Leon

    2015-11-16

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.
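
    A first-order sketch of the tradeoff the abstract analyzes: lowering the supply voltage cuts dynamic power roughly quadratically but raises the failure rate, which checkpoint/restart must absorb, lengthening the run. All constants below (voltage sensitivity, checkpoint costs, nominal power) are illustrative assumptions, not values from the paper:

```python
import math

def expected_energy(t_solve_s, volt, v_nominal, lam_nominal,
                    ckpt_interval_s, ckpt_cost_s, p_nominal_w):
    """Return (expected energy J, expected wall time s) for a run of
    t_solve_s useful seconds under undervolting with checkpoint/restart.

    Assumed model: dynamic power ~ V^2 at fixed frequency; failure rate
    grows exponentially as voltage drops; exponential failures with a
    first-order checkpoint/restart time estimate.
    """
    power_w = p_nominal_w * (volt / v_nominal) ** 2
    lam = lam_nominal * math.exp(10.0 * (v_nominal - volt))  # assumed sensitivity
    segments = t_solve_s / ckpt_interval_s
    # Expected wall time to complete one checkpoint interval plus its checkpoint.
    t_segment = (math.exp(lam * (ckpt_interval_s + ckpt_cost_s)) - 1.0) / lam
    t_total_s = segments * t_segment
    return power_w * t_total_s, t_total_s

# Hypothetical 1-hour solve, 200 W nominal, checkpoint every 600 s (30 s cost).
e_lo, t_lo = expected_energy(3600, 0.9, 1.0, 1e-5, 600, 30, 200)  # undervolted
e_hi, t_hi = expected_energy(3600, 1.0, 1.0, 1e-5, 600, 30, 200)  # nominal
```

    In this regime the undervolted run takes slightly longer but still uses less total energy, which is the qualitative effect the paper exploits.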

  14. Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology

    SciTech Connect

    Tan, Li; Chen, Zizhong; Song, Shuaiwen

    2016-01-18

    Energy efficiency and resilience are two crucial challenges for HPC systems to reach exascale. While energy efficiency and resilience issues have been extensively studied individually, little has been done to understand the interplay between energy efficiency and resilience for HPC systems. Decreasing the supply voltage associated with a given operating frequency for processors and other CMOS-based components can significantly reduce power consumption. However, this often raises system failure rates and consequently increases application execution time. In this work, we present an energy saving undervolting approach that leverages the mainstream resilience techniques to tolerate the increased failures caused by undervolting.

  15. High performance CCD camera system for digitalisation of 2D DIGE gels.

    PubMed

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed, high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from linear range and limit of detection. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
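The limit-of-detection comparison mentioned above is conventionally based on the textbook definition below (a generic formula; the example numbers are illustrative, not measurements from the study):

```python
# Limit of detection: three times the standard deviation of the blank
# signal divided by the slope of the calibration curve.
def limit_of_detection(sd_blank, slope):
    return 3.0 * sd_blank / slope

# e.g. blank noise of 0.5 units against a slope of 10 units per ng:
lod_ng = limit_of_detection(0.5, 10.0)
```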

  16. High performance CCD camera system for digitalisation of 2D DIGE gels

    PubMed Central

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd

    2016-01-01

    An essential step in 2D DIGE‐based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge‐coupled device (CCD) camera‐based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as alternative to a traditionally employed, high‐end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from linear range and limit of detection. PMID:27252121

  17. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for quality improvement in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene: recognizing a lecturer and a lecture stick by pattern recognition techniques, the system deletes the figure of the lecturer, which is of low importance, and displays only the end point of the lecture stick. This makes it possible to create highly compressed lecture video files suitable for Internet distribution. We compare this technique with other simple methods, such as lower frame-rate video files and ordinary MPEG files. The experimental results show that the proposed compression processing system is much more effective than the others.

  18. High-performance sub-terahertz transmission imaging system for food inspection

    PubMed Central

    Ok, Gyeongsik; Park, Kisang; Chun, Hyang Sook; Chang, Hyun-Joo; Lee, Nari; Choi, Sung-Wook

    2015-01-01

    Unlike X-ray systems, a terahertz imaging system can distinguish low-density materials in a food matrix. For applying this technique to food inspection, imaging resolution and acquisition speed ought to be simultaneously enhanced. Therefore, we have developed the first continuous-wave sub-terahertz transmission imaging system with a polygonal mirror. Using an f-theta lens and a polygonal mirror, beam scanning is performed over a range of 150 mm. For obtaining transmission images, the line-beam is incorporated with sample translation. The imaging system demonstrates that a pattern with 2.83 mm line-width at 210 GHz can be identified with a scanning speed of 80 mm/s. PMID:26137392

  19. An ultralightweight, evacuated, load-bearing, high-performance insulation system. [for cryogenic propellant tanks

    NASA Technical Reports Server (NTRS)

    Parmley, R. T.; Cunnington, G. R., Jr.

    1978-01-01

    A new hollow-glass microsphere insulation and a flexible stainless-steel vacuum jacket were demonstrated on a flight-weight cryogenic test tank, 1.17 m in diameter. The system weighs one-third as much as the most advanced vacuum-jacketed design demonstrated to date, a free-standing honeycomb hard shell with a multilayer insulation system (for a Space Tug application). Design characteristics of the flexible vacuum jacket are presented along with a model describing the insulation thermal performance as a function of boundary temperatures and emittance, compressive load on the insulation, and insulation gas pressure. Test data are compared with model predictions and with prior flat-plate calorimeter test results. Potential applications for this insulation system or a derivative of it include the cryogenic Space Tug, the Single-Stage-to-Orbit Space Shuttle, LH2-fueled subsonic and hypersonic aircraft, and LNG applications.
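The kind of boundary-temperature and emittance dependence such a thermal model captures can be illustrated with the textbook two-surface radiation-exchange relation (a generic formula, not the authors' insulation model; the temperatures and emittances below are assumed values):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux(t_hot, t_cold, eps_hot, eps_cold):
    """Net flux between two infinite parallel gray surfaces (W/m^2)."""
    return SIGMA * (t_hot**4 - t_cold**4) / (1.0/eps_hot + 1.0/eps_cold - 1.0)

# 300 K warm boundary facing a 77 K (LN2-temperature) boundary,
# both with low-emittance (0.05) surfaces:
q = radiative_flux(300.0, 77.0, 0.05, 0.05)   # on the order of 10 W/m^2
```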

  20. Analytical design of a high performance stability and control augmentation system for a hingeless rotor helicopter

    NASA Technical Reports Server (NTRS)

    Miyajima, K.

    1978-01-01

    A stability and control augmentation system (SCAS) was designed based on a set of comprehensive performance criteria. Linear optimal control theory was applied to determine appropriate feedback gains for the stability augmentation system (SAS). The helicopter was represented by six-degree-of-freedom rigid body equations of motion and constant factors were used as weightings for state and control variables. The ratio of these factors was employed as a parameter for SAS analysis and values of the feedback gains were selected on this basis to satisfy three of the performance criteria for full and partial state feedback systems. A least squares design method was then applied to determine control augmentation system (CAS) cross feed gains to satisfy the remaining seven performance criteria. The SCAS gains were then evaluated by nine degree-of-freedom equations which include flapping motion and conclusions drawn concerning the necessity of including the pitch/regressing and roll/regressing modes in SCAS analyses.
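As a toy illustration of how linear optimal control theory yields feedback gains (a one-dimensional sketch with an assumed scalar plant and weights, not the six-degree-of-freedom design described in the paper):

```python
import math

# Plant: x' = a*x + b*u, cost J = integral(q*x^2 + r*u^2) dt.
# The scalar algebraic Riccati equation 2*a*p - (b^2/r)*p^2 + q = 0
# gives p, and the optimal state-feedback gain is k = b*p/r (u = -k*x).

def lqr_gain_1d(a, b, q, r):
    coeff = b * b / r
    # positive root of coeff*p^2 - 2*a*p - q = 0
    p = (2.0 * a + math.sqrt(4.0 * a * a + 4.0 * coeff * q)) / (2.0 * coeff)
    return b * p / r

k = lqr_gain_1d(a=-1.0, b=1.0, q=1.0, r=1.0)
# closed loop x' = (a - b*k)*x is stable, since a - b*k < 0
```

Raising the state weight q relative to the control weight r increases the gain, which is the same ratio the abstract uses as the parameter for the SAS analysis.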

  1. HIPERCIR: a low-cost high-performance 3D radiology image analysis system

    NASA Astrophysics Data System (ADS)

    Blanquer, Ignacio; Hernandez, Vincente; Ramirez, Javier; Vidal, Antonio M.; Alcaniz-Raya, Mariano L.; Grau Colomer, Vincente; Monserrat, Carlos A.; Concepcion, Luis; Marti-Bonmati, Luis

    1999-07-01

    Clinics currently have to deal with hundreds of 3D images a day, and processing and visualizing them with currently affordable systems is very costly and slow. The present work describes the features of an integrated parallel-computing software package developed at the Universidad Politecnica de Valencia (UPV) under the European Project HIPERCIR, which is aimed at reducing the time and requirements for processing and visualizing 3D images with low-cost solutions, such as networks of PCs running standard operating systems. HIPERCIR is targeted at Radiology Departments of hospitals and radiology system providers to provide them with a tool for easing day-to-day diagnosis. The project is being developed by a consortium of medical image processing and parallel computing experts from the Computing Systems Department of the UPV, experts on biomedical software, and radiology and tomography clinic experts.

  2. Resource-Efficient Data-Intensive System Designs for High Performance and Capacity

    DTIC Science & Technology

    2015-09-01

    storage technologies have blossomed, raising questions for system balance. SSDs (solid-state drives) based on flash first became practical in the...memory efficiency and garbage collection often require data layout changes on flash. The system designer should be able to select an appropriate...several HashStores along with an older version of a SortedStore and forms a new SortedStore, garbage-collecting deleted or overwritten keys in the
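The merge step the snippet alludes to can be sketched roughly as follows (the data layout and tombstone convention here are assumptions for illustration, not the system's actual on-flash format): fold several unordered HashStores into an older SortedStore, dropping deleted and overwritten keys.

```python
TOMBSTONE = object()   # marker recorded when a key is deleted

def merge(sorted_store, hash_stores):
    """sorted_store: list of (key, value) pairs sorted by key.
    hash_stores: dicts ordered newest-first; newer entries win."""
    merged = dict(sorted_store)
    for store in reversed(hash_stores):   # apply oldest first...
        merged.update(store)              # ...so newer stores overwrite
    # drop tombstoned keys and emit a fresh sorted store
    return sorted((k, v) for k, v in merged.items() if v is not TOMBSTONE)

# 'b' was deleted in the newest HashStore, 'c' was added in an older one:
new_sorted = merge([("a", 1), ("b", 2)], [{"b": TOMBSTONE}, {"c": 3}])
```

The sequential rewrite is what makes the approach flash-friendly: the new SortedStore is written in one pass rather than updated in place.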

  3. A Heterogeneous High-Performance System for Computational and Computer Science

    DTIC Science & Technology

    2016-11-15

    session of the workshop, the attendees were exposed to the usage of the supercomputer. They were all able to log on to the system and learned how to...successful and efficient information system applications such as GIS, gene expression analysis, social network modeling, and multimedia information...support from the DNA Learning Center, Cold Spring Harbor Laboratory, NY, USA. The barcode sequence data generated from plants, animals, fungi and

  4. Chaining for Flexible and High-Performance Key-Value Systems

    DTIC Science & Technology

    2012-09-01

    conference on Symposium on Networked Systems Design and Implementation - Volume 1, NSDI'04, pages 13-13, Berkeley, CA, USA, 2004. USENIX...strong data consistency. We use Ouroboros to implement a distributed key-value system, FAWN-KV, designed with the goal of supporting the three key

  5. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

    SciTech Connect

    Sickinger, D.; Van Geet, O.; Ravenscroft, C.

    2014-11-01

    In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. This system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling was included. Reduced temperatures inside the servers caused all fans to reduce power to the lowest possible BIOS setting, indicating further energy savings potential if additional fan control is included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no necessary maintenance and no leaks.
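A back-of-the-envelope check of such heat-capture fractions uses the generic coolant heat-balance relation Q = ṁ·cp·ΔT (the flow rate, temperatures, and server power below are assumed values for illustration, not NREL's measurements):

```python
def heat_captured_w(flow_lpm, t_in_c, t_out_c, cp=4186.0, rho=1000.0):
    """Heat carried away by water coolant, in watts."""
    m_dot = flow_lpm / 60.0 * rho / 1000.0   # L/min -> kg/s
    return m_dot * cp * (t_out_c - t_in_c)

def capture_fraction(flow_lpm, t_in_c, t_out_c, server_power_w):
    """Fraction of server power captured by the liquid loop."""
    return heat_captured_w(flow_lpm, t_in_c, t_out_c) / server_power_w

# 2 L/min heated from 25 C to 30 C against a 1.5 kW server:
frac = capture_fraction(2.0, 25.0, 30.0, 1500.0)
```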

  6. A high-performance miniaturized time division multiplexed sensor system for remote structural health monitoring

    NASA Astrophysics Data System (ADS)

    Lloyd, Glynn D.; Everall, Lorna A.; Sugden, Kate; Bennion, Ian

    2004-09-01

    We report for the first time the design, implementation and commercial application of a hand-held optical time division multiplexed, distributed fibre Bragg grating sensor system. A unique combination of state-of-the-art electronic and optical components enables system miniaturization whilst maintaining exceptional performance. Supporting more than 100 low-cost sensors per channel, the battery-powered system operates remotely via a wireless GSM link, making it ideal for real-time structural health monitoring in harsh environments. Driven by highly configurable timing electronics, an off-the-shelf telecommunications semiconductor optical amplifier performs combined amplification and gating. This novel optical configuration boasts a spatial resolution of less than 20 cm and an optical signal-to-noise ratio of better than 30 dB, yet utilizes sensors with reflectivities of only a few percent and does not require RF-speed signal processing devices. This paper highlights the performance and cost advantages of a system that utilizes TDM-style, mass-manufactured commodity FBGs. Created in continual lengths, these sensors reduce stock inventory, eradicate application-specific array design and simplify system installation and expansion. System analysis from commercial installations in oil exploration, wind energy and vibration measurement will be presented, with results showing kilohertz interrogation speed and microstrain resolution.
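The quoted sub-20 cm spatial resolution is consistent with the standard time-of-flight relation Δz = c·τ / (2·n_group); the 2 ns gate width and group index used below are assumed values for illustration, not figures from the paper:

```python
def tdm_spatial_resolution_m(pulse_width_s, n_group=1.468):
    """Two-way time-of-flight resolution in fibre, in metres."""
    c = 2.998e8  # speed of light in vacuum, m/s
    return c * pulse_width_s / (2.0 * n_group)

dz = tdm_spatial_resolution_m(2e-9)   # ~0.2 m for a 2 ns gate
```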

  7. Design and test of high performance composite tubes for use in deep water drilling and production systems

    NASA Astrophysics Data System (ADS)

    Odru, Pierre; Massonpierre, Yves

    1987-10-01

    High performance composite tubes to be used as marine risers in deepwater drilling or production systems were developed. They are composed of several layers with independent functions. Structural layers, made of high-resistance fibers set in a resin matrix, are filament wound and consist of circumferential layers, perpendicular to the tube axis, to resist bursting stresses, and longitudinal layers, helically wound, to resist axial forces. The tubes are completed with internal and external liners and are terminated at the extremities by steel end pieces to which the composite layers are carefully bonded. The concept of high performance composite tubes is described, including their end fittings. Tests were carried out to verify and improve the properties of the pipes under ultimate conditions (burst pressure up to 170 MPa, ultimate tensile, collapse) as well as in fatigue and aging. Results are satisfactory and real applications are envisaged.

  8. High performance work systems and employee well-being: a two stage study of a rural Australian hospital.

    PubMed

    Young, Suzanne; Bartram, Timothy; Stanton, Pauline; Leggat, Sandra G

    2010-01-01

    This paper aims to explore the attitudes of managers and employees to high performance work systems (HPWS) in a medium-sized rural Australian hospital. The study consists of two stages. Stage one involved a qualitative investigation consisting of interviews and focus group sessions with senior, middle and line management at the hospital; Bowen and Ostroff's framework was used to examine how strategic HRM was understood, interpreted and operationalised across the management hierarchy, so that stage one investigates the views of managers concerning the implementation of strategic HRM/HPWS. Stage two consisted of a questionnaire administered to all hospital employees, examining the mediating effects of social identification on the relationships between high performance work systems and affective commitment and job satisfaction; its purpose was to investigate the views and effects of SHRM/HPWS on employees. It should be noted that HPWS and strategic HRM are used interchangeably in this paper. At the management level, the study found the importance of distinctiveness, consistency and consensus in the interpretation of strategic HRM/HPWS practices across the organization. Findings indicate that social identification mediates the relationships between HPWS and both affective commitment and job satisfaction. High performance work systems may play a crucial role in facilitating social identification at the unit level, and such practices and management support are likely to provide benefits in terms of high-performing, committed employees. The paper argues that team leaders and managers play a key role in building social identification within the team and that organizations need to understand this role and provide recognition, reward, education and support to their middle and lower managers.

  9. Design and Integration for High Performance Robotic Systems Based on Decomposition and Hybridization Approaches

    PubMed Central

    Zhang, Dan; Wei, Bin

    2017-01-01

    Currently, the uses of robotics are limited with respect to performance capabilities. Improving the performance of robotic mechanisms is and still will be the main research topic in the next decade. In this paper, design and integration for improving performance of robotic systems are achieved through three different approaches, i.e., structure synthesis design approach, dynamic balancing approach, and adaptive control approach. The purpose of robotic mechanism structure synthesis design is to propose certain mechanism that has better kinematic and dynamic performance as compared to the old ones. For the dynamic balancing design approach, it is normally accomplished based on employing counterweights or counter-rotations. The potential issue is that more weight and inertia will be included in the system. Here, reactionless based on the reconfiguration concept is put forward, which can address the mentioned problem. With the mechanism reconfiguration, the control system needs to be adapted thereafter. One way to address control system adaptation is by applying the “divide and conquer” methodology. It entails modularizing the functionalities: breaking up the control functions into small functional modules, and from those modules assembling the control system according to the changing needs of the mechanism. PMID:28075360

  10. High performance liquid chromatography of selected alkaloids in ion-exchange systems.

    PubMed

    Petruczynik, Anna; Waksmundzka-Hajnos, Monika

    2013-10-11

    An HPLC procedure on a strong cation exchange (SCX) column has been developed for the analysis of selected alkaloids from different chemical groups. The retention, separation selectivity, peak symmetry and system efficiency were examined in eluent systems containing different types and concentrations of buffers at various pH values, with the addition of organic modifiers: methanol (MeOH), acetonitrile (CH3CN), tetrahydrofuran (THF) or dioxane (Dx). The retention factors as functions of buffer concentration, mobile phase pH and the percentage of modifier in the eluent were investigated. More symmetrical peaks and the highest theoretical plate numbers were obtained in eluents containing acetonitrile or tetrahydrofuran. In most cases, increasing the buffer concentration decreased the alkaloids' retention, improved peak symmetry and increased the theoretical plate number. Improved peak symmetry and system efficiency for most of the investigated alkaloids were observed in systems containing buffers at strongly acidic pH. The results also reveal a large influence of the salt cation used for buffer preparation. The results obtained on the SCX column were compared with those obtained on a C18 column, and the most efficient and selective systems were used for the separation of an alkaloid standard mixture. Copyright © 2013 Elsevier B.V. All rights reserved.
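The figures of merit examined above follow the standard chromatographic definitions (textbook formulas; the example numbers are illustrative, not values from the study):

```python
def retention_factor(t_r, t_0):
    """k = (tR - t0) / t0, from analyte and void retention times."""
    return (t_r - t_0) / t_0

def plate_number(t_r, w_half):
    """N = 5.54 * (tR / w_1/2)^2, from the peak width at half height."""
    return 5.54 * (t_r / w_half) ** 2

# e.g. a peak at tR = 5.0 min with t0 = 1.0 min has k = 4;
# a peak at 10 min with a 0.2 min half-height width has N = 13,850.
```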

  11. A High Performance Sample Delivery System for Closed-Path Eddy Covariance Measurements

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Leggett, Graham; Alstad, Karrin; Wahl, Edward

    2016-04-01

    The Picarro G2311-f Cavity Ring-Down Spectrometer (CRDS) measures CO2, CH4 and water vapor at high frequency with parts-per-billion (ppb) sensitivity for eddy covariance, gradient, and eddy accumulation measurements. In flux mode, the analyzer measures the concentration of all three species at 10 Hz with a 5 Hz cavity gas-exchange rate. We developed an enhanced pneumatic sample delivery system for drawing air from the atmosphere into the cavity. The new sample delivery system maintains the 5 Hz gas-exchange rate and allows longer sample intake lines to be configured in tall-tower applications (> 250 ft line at sea level). We quantified the system performance in terms of vacuum pump headroom and 10-90% concentration step response for several intake line lengths at various elevations. Sample eddy covariance data are shown from an alfalfa field in Northern California, USA.
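The 10-90% step-response metric mentioned above is generically defined as the time between the signal first crossing 10% and 90% of the total step (the sampling grid and time constant in this sketch are assumptions, not the instrument's characteristics):

```python
import math

def rise_time_10_90(times, values):
    """10-90% rise time of a monotonic step response."""
    lo, hi = values[0], values[-1]
    t10 = t90 = None
    for t, v in zip(times, values):
        frac = (v - lo) / (hi - lo)
        if t10 is None and frac >= 0.1:
            t10 = t
        if t90 is None and frac >= 0.9:
            t90 = t
    return t90 - t10

# first-order (exponential) response with a 1 s time constant,
# sampled on a 1 ms grid; ideal value is tau * ln(9) ~ 2.2 s:
ts = [i * 0.001 for i in range(5001)]
vs = [1.0 - math.exp(-t) for t in ts]
rt = rise_time_10_90(ts, vs)
```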

  12. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV, but its application is obstructed by poor image quality, huge data volume and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. Firstly, a particular depth-image-based rendering (DIBR) technique is used in the encoding process to exploit the inter-view correlation between different sub-images (SIs). Secondly, the same DIBR method is applied on the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality while reducing calculation complexity.

  13. Building America Best Practices Series, Volume 6: High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Ruiz, Kathleen A.; Steward, Heidi E.; Love, Pat M.

    2007-06-04

    This guide was written by PNNL for the US Department of Energy's Building America program to provide information for residential production builders interested in building near-zero-energy homes. It gives in-depth descriptions of various rooftop photovoltaic power-generating systems for homes, along with extensive information on various designs of solar thermal water heating systems. The guide also provides construction company owners and managers with an understanding of how solar technologies can be added to their homes in a way that is cost effective, practical, and marketable. Twelve case studies provide examples of production builders across the United States who are building energy-efficient homes with photovoltaic or solar water heating systems.

  14. [Effect of high performance liquid chromatographic instrument system on the analysis of erythromycin A oxime].

    PubMed

    Sun, Jing-gu; Yao, Guo-wei; Ou, Yu-xiang

    2004-09-01

    An HPLC method for the determination of erythromycin A oxime and related compounds was studied, and the effect of chromatographic systems, including a HITACHI L-7100, a Shimadzu LC-6A, and a Waters 474 with their respective columns, was analyzed. It was revealed that different HPLC apparatus and columns have an obvious impact on peak separation and retention time under the general chromatographic conditions. Suitable chromatographic conditions for several different systems were summarized, with good linear relationships, which is very significant for the quality control of erythromycin A oxime and related compounds.

  15. Damage-mitigating control of space propulsion systems for high performance and extended life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang; Dai, Xiaowen; Carpino, Marc; Lorenzo, Carl F.

    1993-01-01

    Calculations are presented showing that a substantial improvement in service life of a reusable rocket engine can be achieved by an insignificant reduction in the system dynamic performance. The paper introduces the concept of damage mitigation and formulates a continuous-time model of fatigue damage dynamics. For control of complex mechanical systems, damage prediction and damage mitigation are carried out based on the available sensory and operational information such that the plant can be inexpensively maintained and safely and efficiently steered under diverse operating conditions. The results of simulation experiments are presented for transient operations of a reusable rocket engine.

  16. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power- and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including a novel factor and task deadlines. Experiment results show that the proposed algorithm can reduce energy consumption by up to 54.2% and greatly reduce missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382
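A scheduler in this spirit can be sketched as follows (the heuristics and DVFS levels here are illustrative assumptions, not the PDAMS algorithm itself): greedily balance tasks across cores, then run each core at the lowest frequency that still meets every task's deadline.

```python
def assign(tasks, n_cores, freqs=(0.6, 0.8, 1.0)):
    """tasks: list of (cycles, deadline) pairs; freqs: normalized DVFS
    levels, lowest first. Returns [(task_queue, chosen_freq)] per core."""
    queues = [[] for _ in range(n_cores)]
    loads = [0.0] * n_cores
    # earliest-deadline-first ordering, least-loaded core first
    for cycles, deadline in sorted(tasks, key=lambda t: t[1]):
        i = loads.index(min(loads))
        queues[i].append((cycles, deadline))
        loads[i] += cycles
    plan = []
    for queue in queues:
        chosen = freqs[-1]                # fall back to full speed
        for f in freqs:                   # try lowest frequency first
            t, feasible = 0.0, True
            for cycles, deadline in queue:
                t += cycles / f           # cumulative finish time
                if t > deadline:
                    feasible = False
                    break
            if feasible:
                chosen = f
                break
        plan.append((queue, chosen))
    return plan
```

Lower frequency means lower dynamic power, so picking the lowest deadline-feasible DVFS level per core is the simplest way to couple the two criteria.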

  17. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. )

    1989-09-01

    In this paper the author describes current high-performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  18. A bioinspired, reusable, paper-based system for high-performance large-scale evaporation.

    PubMed

    Liu, Yanming; Yu, Shengtao; Feng, Rui; Bernard, Antoine; Liu, Yang; Zhang, Yao; Duan, Haoze; Shang, Wen; Tao, Peng; Song, Chengyi; Deng, Tao

    2015-05-06

    A bioinspired, reusable, paper-based gold-nanoparticle film is fabricated by depositing an as-prepared gold-nanoparticle thin film on airlaid paper. This paper-based system with enhanced surface roughness and low thermal conductivity exhibits increased efficiency of evaporation, scale-up potential, and proven reusability. It is also demonstrated to be potentially useful in seawater desalination.

  19. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  20. Analysis of a magnetically suspended, high-performance instrument pointing system

    NASA Technical Reports Server (NTRS)

    Joshi, S. M.

    1978-01-01

    This paper describes a highly accurate auxiliary instrument pointing system which can provide fine pointing for a variety of solar-, stellar-, and Earth-viewing scientific instruments during extended space shuttle orbital missions. This system, called the Annular Suspension and Pointing System (ASPS), consists of pointing assemblies for coarse and vernier pointing. The 'coarse' assembly is attached to the spacecraft (e.g., the space shuttle) and consists of an elevation gimbal and a lateral gimbal to provide coarse pointing. The vernier pointing assembly consists of the payload instrument mounted on a plate around which is attached a continuous annular rim. The vernier assembly is suspended in the lateral gimbal using magnetic actuators which provide rim suspension forces and fine pointing torques. A detailed linearized mathematical model is developed for the ASPS/space shuttle system, and control laws and payload attitude state estimators are designed. Statistical pointing performance is predicted in the presence of stochastic disturbances such as crew motion, sensor noise, and actuator noise.

  1. Building High-Performing and Improving Education Systems: Quality Assurance and Accountability. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    Monitoring, evaluation, and quality assurance in their various forms are seen as being one of the foundation stones of high-quality education systems. De Grauwe, writing about "school supervision" in four African countries in 2001, linked the decline in the quality of basic education to the cut in resources for supervision and support.…

  2. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
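The 85% parallel scaling efficiency quoted above follows from the standard strong-scaling definition (the formula is standard; the per-step timings below are invented for illustration, not measurements from the paper):

```python
def scaling_efficiency(t_ref, n_ref, t_n, n):
    """Strong-scaling efficiency relative to a reference run:
    (t_ref * n_ref) / (t_n * n)."""
    return (t_ref * n_ref) / (t_n * n)

# e.g. 100 s/step on 64 CPUs vs. 7.35 s/step on 1024 CPUs -> ~0.85
eff = scaling_efficiency(100.0, 64, 7.35, 1024)
```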

  3. Aim Higher: Lofty Goals and an Aligned System Keep a High Performer on Top

    ERIC Educational Resources Information Center

    McCommons, David P.

    2014-01-01

    Every school district is feeling the pressure to ensure higher academic achievement for all students. A focus on professional learning for an administrative team not only improves student learning and achievement, but also assists in developing a systemic approach for continued success. This is how the Fox Chapel Area School District in…

  6. Rewarding high performers--the pay-for-performance system at Heritage Dental Center.

    PubMed

    Kohen, J

    2001-01-01

    As it has in many businesses in the U.S., a system rewarding exceptional performance has proven to be successful in the dental industry. The effective staff incentive program in place at Heritage Dental Center provides benefits for all members of the dental team.

  7. On High Performance of Updates within an Efficient Document Retrieval System.

    ERIC Educational Resources Information Center

    Motzkin, D.

    1994-01-01

    Describes fast, dynamic update algorithms for document retrieval systems. B-trees are discussed; the M-B-T file directory structure is explained; insertions and deletions in inverted files are described; and performance evaluation is discussed. An appendix provides the formal definition of an M-B-T directory. (Contains 18 references.) (LRW)
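
For readers unfamiliar with the data structure, insertions and deletions in an inverted file look like the following toy sketch (the paper's M-B-T directory is a B-tree-based structure; this is only the simplest possible illustration of the update operations it supports):

```python
from collections import defaultdict

# Minimal inverted file: each term maps to the set of documents containing it.
index = defaultdict(set)

def insert_doc(doc_id: int, text: str) -> None:
    # Insertion: add the document to the posting set of every term it contains.
    for term in text.lower().split():
        index[term].add(doc_id)

def delete_doc(doc_id: int) -> None:
    # Deletion: remove the document from every posting set it appears in.
    for postings in index.values():
        postings.discard(doc_id)

insert_doc(1, "fast dynamic update algorithms")
insert_doc(2, "dynamic document retrieval")
delete_doc(1)
print(sorted(index["dynamic"]))  # document 1 removed, leaving [2]
```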

  8. High performance computing in biology: multimillion atom simulations of nanoscale systems.

    PubMed

    Sanbonmatsu, K Y; Tung, C-S

    2007-03-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nano-scale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail.
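
Parallel scaling efficiency as quoted here has a standard definition: speedup divided by the growth in CPU count. A minimal sketch with hypothetical timings, not the paper's measured data:

```python
def parallel_efficiency(t_ref: float, n_ref: int, t_n: float, n: int) -> float:
    # Efficiency relative to a reference run: observed speedup divided by
    # the increase in CPU count. 1.0 means perfect linear scaling.
    speedup = t_ref / t_n
    return speedup / (n / n_ref)

# Hypothetical numbers: if a simulation step takes 100 s on 128 CPUs and
# 14.7 s on 1024 CPUs, efficiency relative to the 128-CPU run is ~0.85.
print(round(parallel_efficiency(100.0, 128, 14.7, 1024), 2))
```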

  9. High Performance Computing and Enabling Technologies for Nano and Bio Systems and Interfaces

    DTIC Science & Technology

    2014-12-12

    Nanoparticle incorporation and aggregation in cylindrical polymer micelles...computational modeling. Experimentally, the aptamer (anti-MUC1 S2.2) has been identified for the breast cancer biomarker mucin 1 (MUC1) protein. However, within...peptide-aptamer systems consisting of MUC1 (APDTRPAP) and MUC1-G (APDTRPAPG) peptides with the anti-MUC1 aptamer under similar physiological

  10. Knowledge Work Supervision: Transforming School Systems into High Performing Learning Organizations.

    ERIC Educational Resources Information Center

    Duffy, Francis M.

    1997-01-01

    This article describes a new supervision model conceived to help a school system redesign its anatomy (structures), physiology (flow of information and webs of relationships), and psychology (beliefs and values). The new paradigm (Knowledge Work Supervision) was constructed by reviewing the practices of several interrelated areas: sociotechnical…

  11. Isolation, pointing, and suppression (IPS) system for high-performance spacecraft

    NASA Astrophysics Data System (ADS)

    Hindle, Tim; Davis, Torey; Fischer, Jim

    2007-04-01

    Passive mechanical isolation is often the first step taken to remedy vibration issues on-board a spacecraft. In many cases, this is done with a hexapod of axial members or struts to obtain the desired passive isolation in all six degrees-of-freedom (DOF). In some instances, where the disturbance sources are excessive or the payload is particularly sensitive to vibration, additional steps are taken to improve the performance beyond that of passive isolation. Additional performance or functionality can be obtained with the addition of active control, using a hexapod of hybrid (passive/active) elements at the interface between the payload and the bus. This paper describes Honeywell's Isolation, Pointing, and Suppression (IPS) system. It is a hybrid isolation system designed to isolate a sensitive spacecraft payload with very low passive resonant break frequencies while affording agile independent payload pointing, on-board payload disturbance rejection, and active isolation augmentation. This system is an extension of the work done on Honeywell's previous Vibration Isolation, Steering, and Suppression (VISS) flight experiment. Besides being designed for a different-size payload than VISS, the IPS strut includes a dual-stage voice coil design for improved dynamic range as well as improved low-noise drive electronics. In addition, the IPS struts include integral load cells, gap sensors, and payload-side accelerometers for control and telemetry purposes. The associated system-level control architecture to accomplish these tasks is also new for this program as compared to VISS. A summary of the IPS system, including analysis, hardware design and build, and single-axis bipod testing, is presented.

  12. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system at the ASCI Blue Mountain supercomputer cluster. We study the impacts of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as expansion factor. This leads to the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uni-processor and a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and support checkpointing in order for Fair Share to function more closely to the spirit in which it was originally developed.
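
Expansion factor, the main metric examined, is simply turnaround time over run time. A sketch with made-up jobs:

```python
from dataclasses import dataclass

@dataclass
class Job:
    wait: float  # seconds spent queued
    run: float   # seconds spent executing

def expansion_factor(job: Job) -> float:
    # Ratio of total turnaround time to pure run time;
    # 1.0 means the job never waited in the queue.
    return (job.wait + job.run) / job.run

jobs = [Job(wait=600, run=3600), Job(wait=7200, run=3600)]
print([round(expansion_factor(j), 2) for j in jobs])  # [1.17, 3.0]
```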

  13. Building-Wide, Adaptive Energy Management Systems for High-Performance Buildings: Final CRADA Report

    SciTech Connect

    Zavala, Victor M.

    2016-10-27

    Development and field demonstration of the minimum ratio policy for occupancy-driven, predictive control of outdoor air ventilation. Technology transfer of Argonne’s methods for occupancy estimation and forecasting and for M&V to BuildingIQ for their deployment. Selection of CO2 sensing as the currently best-available technology for occupancy-driven controls. Accelerated restart capability for the commercial BuildingIQ system using horizon-shifting strategies applied to receding-horizon optimal control problems. Empirically based evidence of 30% chilled-water energy savings and 22% total HVAC energy savings achievable with the BuildingIQ system operating in the APS Office Building on-site at Argonne.

  14. Whisker: a client-server high-performance multimedia research control system.

    PubMed

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It gives client programs access to this hardware, communicating with them via a simple text-based network protocol carried over the standard Internet protocols. Clients implementing behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. This system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described.
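
A client for a line-oriented protocol of this kind needs very little code. This sketch frames newline-terminated commands over TCP; the function names, host, port, and command strings are illustrative placeholders, not Whisker's actual API:

```python
import socket

def frame_command(cmd: str) -> bytes:
    # Line-oriented text protocols terminate each command with a newline;
    # framing is all a client needs to produce the wire format.
    return (cmd + "\n").encode("ascii")

def send_command(host: str, port: int, cmd: str) -> str:
    """Open a TCP connection, send one command, return the first reply line.
    A real client would keep the connection open and multiplex events."""
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(frame_command(cmd))
        return conn.makefile("r", encoding="ascii").readline().strip()

print(frame_command("Ping"))  # b'Ping\n'
```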

  15. MutationFinder: a high-performance system for extracting point mutation mentions from text.

    PubMed

    Caporaso, J Gregory; Baumgartner, William A; Randolph, David A; Cohen, K Bretonnel; Hunter, Lawrence

    2007-07-15

    Discussion of point mutations is ubiquitous in biomedical literature, and manually compiling databases or literature on mutations in specific genes or proteins is tedious. We present an open-source, rule-based system, MutationFinder, for extracting point mutation mentions from text. On blind test data, it achieves nearly perfect precision and a markedly improved recall over a baseline. MutationFinder, along with a high-quality gold standard data set, and a scoring script for mutation extraction systems have been made publicly available. Implementations, source code and unit tests are available in Python, Perl and Java. MutationFinder can be used as a stand-alone script, or imported by other applications. http://bionlp.sourceforge.net.
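
Since the abstract describes the system as rule-based, the core idea is easy to sketch. The pattern below is a simplified stand-in for MutationFinder's rules, covering only one-letter "wNm" mentions such as A123T (the real system also handles three-letter codes, full residue names, and precision filters):

```python
import re

# One-letter amino-acid alphabet; a wNm mention is wild type, position, mutant.
AMINO = "ACDEFGHIKLMNPQRSTVWY"
POINT_MUTATION = re.compile(rf"\b([{AMINO}])(\d+)([{AMINO}])\b")

def extract_mutations(text: str):
    # Return (wild_type, position, mutant) triples for each wNm mention.
    return [(wt, int(pos), mt) for wt, pos, mt in POINT_MUTATION.findall(text)]

print(extract_mutations("The A123T and E45K substitutions reduced activity."))
# [('A', 123, 'T'), ('E', 45, 'K')]
```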

  16. The Use of High Performance Computing (HPC) to Strengthen the Development of Army Systems

    DTIC Science & Technology

    2011-11-01

    IEDs. The vehicle is large (14-25 tons) and costs from $500,000 to $1M. It is difficult to handle on narrow dirt roads and has a tendency to roll...keeps adding volumes of new data from the many telescopes searching the sky. The size of these data sets requires the power of the latest in high...thereby enabling faster system deployment • Reduce experimental testing time and effort through analysis of virtual prototypes The CREATE program

  17. Optimisation and coupling of high-performance photocyclic initiating systems for efficient holographic materials (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ley, Christian; Carré, Christian; Ibrahim, Ahmad; Allonas, Xavier

    2017-05-01

    For fabrication of diffractive optical elements or for holographic data storage, photopolymer materials have turned out to be serious candidates, taking into account performances such as high spatial resolution, dry processing capability, ease of use, and high versatility. From the chemical point of view, several organic materials are able to exhibit refractive index changes resulting from polymerization, crosslinking or depolymerization, such as mixtures of monomers with several reactive functions and oligomers, associated with additives, fillers, and a photoinitiating system (PIS). In this work, the efficiencies of two- and three-component PIS as holographic recording materials are analyzed in terms of photopolymerization kinetics and diffraction yield. The selected systems are based on visible dyes, an electron donor, and an electron acceptor. In order to investigate the influence of the photophysical properties of the dye on holographic recording material performance, time-resolved and steady-state spectroscopic studies of the PIS are presented. These detailed photochemical studies of the PIS outline the possible existence of photocyclic initiating systems (PCIS), in which the dye is regenerated during the chemical process. Simultaneously, these visible systems are associated with fluorinated acrylate monomers for the recording of transmission gratings. To get more insight into hologram formation, the gratings' recording curves were compared to those of monomer-to-polymer conversion obtained by real-time Fourier transform infrared spectroscopy. This work outlines the importance of the coupling between the photochemical reactions and the holographic resin. Moreover, the application of the PCIS in holographic recording outlines the influence of the photochemistry on final holographic material properties: here a sensitive material with high diffraction yield is described.

  18. Development of a high-performance multichannel system for time-correlated single photon counting

    NASA Astrophysics Data System (ADS)

    Peronio, P.; Cominelli, A.; Acconcia, G.; Rech, I.; Ghioni, M.

    2017-05-01

    Time-Correlated Single Photon Counting (TCSPC) is one of the most effective techniques for measuring weak and fast optical signals. It outperforms traditional "analog" techniques due to its high sensitivity along with high temporal resolution. Despite those significant advantages, a main drawback still exists: the long acquisition time needed to perform a measurement. In past years, many TCSPC systems have been developed with ever higher numbers of channels, aimed at dealing with that limitation. Nevertheless, modern systems suffer from a strong trade-off between parallelism level and performance: the higher the number of channels, the poorer the performance. In this work we present the design of a 32x32 TCSPC system intended to overcome this trade-off. To this aim, different technologies have been employed to get the best performance from both the detectors and the sensing circuits. The use of different technologies is enabled by Through-Silicon Vias (TSVs), which will be investigated as a possible solution for connecting the detectors to the sensing circuits. When dealing with a high number of channels, the count rate is inevitably limited by the achievable throughput to the external PC. We targeted a throughput of 10 Gb/s, which is beyond the state of the art, and designed the number of TCSPC channels accordingly. A dynamic-routing logic will connect the detectors to the smaller number of acquisition chains.
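
The link-budget reasoning in the last two sentences is simple arithmetic. A sketch assuming a hypothetical 32-bit word per photon event (the actual event format is not given in the abstract):

```python
def max_event_rate(throughput_gbps: float, bits_per_event: int) -> float:
    # Aggregate photon-event rate (events/s) a readout link can sustain.
    return throughput_gbps * 1e9 / bits_per_event

# If each time-stamped photon costs 32 bits, a 10 Gb/s link to the PC caps
# the whole array at 312.5 million events per second; a budget like this is
# what fixes how many acquisition chains are worth building.
print(max_event_rate(10, 32))
```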

  19. A biolized, compact, low noise, high performance implantable electromechanical ventricular assist system.

    PubMed

    Sasaki, T; Takatani, S; Shiono, M; Sakuma, I; Noon, G P; Nosé, Y; DeBakey, M E

    1991-01-01

    An implantable electromechanical ventricular assist system (VAS) intended for permanent human use was developed. It consisted of a conically shaped pumping chamber, a polyolefin (Hexsyn) rubber diaphragm attached to a pusher-plate, and a compact actuator with a direct-current brushless motor and a planetary rollerscrew. The outer diameter was 97 mm, and the total thickness was 70 mm. This design was chosen to give a stroke volume of 63 ml. The device weighs 620 g, with a total volume of 360 ml. The pump can provide 8 L/min flow against a 120 mmHg afterload with a preload of 10 mmHg. The inner surface of the device, including the pumping chamber and diaphragm, was made biocompatible with a dry gelatin coating. To date, two subacute (2- and 6-day) calf studies have been conducted. The pump showed a reasonable anatomic fit inside the left thorax, and the entire system functioned satisfactorily both in the fill-empty mode using the Hall-effect sensor signals and in the conventional fixed-rate mode. There were no thromboembolic complications despite the absence of anticoagulation therapy. The system has now been under endurance testing for more than 10 weeks (9 million cycles). This VAS is compact, low noise, easy to control, and has excellent biocompatibility.

  20. Novel digital logic gate for high-performance CMOS imaging system

    NASA Astrophysics Data System (ADS)

    Chung, Hoon H.; Joo, Youngjoong

    2004-06-01

    Today, CMOS image sensors are commonly used in many low-resolution applications because the CMOS imaging system has several advantages over the conventional CCD imaging system. However, several problems remain for the realization of a single-chip CMOS imaging system. One main problem is substrate coupling noise, which is caused by digital switching noise. Because the CMOS image sensor shares the same substrate with the surrounding digital circuitry, it is difficult for the sensor to achieve good performance. In order to investigate the substrate coupling noise effect on the CMOS image sensor, conventional CMOS logic, C-CBL (Complementary Current-Balanced Logic), and the proposed low-switching-noise logic are simulated and compared. The proposed logic avoids not only the large digital switching noise of conventional CMOS logic, but also the high power consumption of C-CBL. Both the total instantaneous current behavior on the power supply and the peak-to-peak substrate voltage variation (di/dt noise) are investigated. The simulation is performed in AMI 0.5μm CMOS technology.

  1. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  2. Self-similar module for FP/LNS arithmetic in high-performance FPGA systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Lambert; Mohl, Stefan

    2005-06-01

    The scientific community has gratefully embraced floating-point arithmetic to escape the close attention to accuracy and precision required by fixed-point computational styles. Though its deficiencies are well known, the role of the floating-point system as a standard has kept other number representation systems from coming into practice. The paper discusses the relation between fixed- and floating-point numbers from a pragmatic point of view that allows mixing both systems to optimize FPGA-based hardware accelerators. The method is developed for the Mitrion "processor on demand" technology, where a computationally intensive algorithm is transformed into a dedicated hardware implementation. The large gap in cycle time between fixed- and floating-point operations, and between direct and reverse operations, makes the on-chip control of the fine-grain pipelines of parallel logic very complicated. Having alternative hardware realizations available can alleviate this. The paper uses a conjunctive notation, also known as DIGILOG, to introduce a flexible means of creating configurable arithmetic of arbitrary order using a single module type. This allows the Mitrion hardware compiler to match the hardware more closely to the demands of the specific algorithm. Typical applications are in molecular simulation and real-time image analysis.

  3. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  4. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    PubMed

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-02

    The microbial fuel cell (MFC), which can directly generate electricity from organic waste or biomass, is a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically cannot directly operate most electrical applications, whether supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic, low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the power extracted from MFCs, regardless of the power and voltage fluctuations of MFCs over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by directly drawing power from the MFC itself, without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored in the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits the sensor data, and successfully transmitted a sensor reading every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels.
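
The dynamic MPPT described here is implemented in the PMS chip itself; as a software illustration only, the classic perturb-and-observe strategy applied to a hypothetical MFC source model looks like this:

```python
def perturb_and_observe(measure_power, set_load, step=0.05, iters=60):
    # Classic perturb-and-observe MPPT: nudge the operating point, keep the
    # direction if power rose, reverse it if power fell. This software sketch
    # only shows the principle, not the paper's circuit.
    r, direction = 1.0, 1
    last_p = measure_power()
    for _ in range(iters):
        r += direction * step
        set_load(r)
        p = measure_power()
        if p < last_p:
            direction = -direction   # power fell: perturb the other way
        last_p = p
    return r

# Hypothetical MFC model: a 0.8 V source with 1.8 ohm internal resistance;
# maximum power transfer occurs when the load matches the 1.8 ohm source.
R_INT, V_OC = 1.8, 0.8
state = {"r": 1.0}
def set_load(r): state["r"] = r
def measure_power():
    i = V_OC / (R_INT + state["r"])
    return i * i * state["r"]   # power dissipated in the load

r_mpp = perturb_and_observe(measure_power, set_load)
print(round(r_mpp, 2))  # settles near 1.8, oscillating around the MPP
```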

  5. High performance 3-coil wireless power transfer system for the 512-electrode epiretinal prosthesis.

    PubMed

    Zhao, Yu; Nandra, Mandheerej; Yu, Chia-Chen; Tai, Yu-chong

    2012-01-01

    The next-generation retinal prostheses feature high image resolution and chronic implantation. These features demand wireless, efficient delivery of power as high as 100 mW. A common solution is the 2-coil inductive power link used by current retinal prostheses. This power link tends to include a larger-size extraocular receiver coil coupled to the external transmitter coil, with the receiver coil connected to the intraocular electrodes through a trans-sclera, trans-choroid cable. In long-term implantation of the device, the cable may cause hypotony (low intraocular pressure) and infection. However, when a 2-coil system is constructed from a small intraocular receiver coil, the efficiency drops drastically, which may induce excessive heat dissipation and electromagnetic field exposure. Our previous 2-coil system achieved only 7% power transfer. This paper presents a fully intraocular and highly efficient wireless power transfer system that introduces another inductive coupling link to bypass the trans-sclera, trans-choroid cable. With the specific equivalent load of our customized 512-electrode stimulator, the 3-coil inductive link was measured to have an overall power transfer efficiency of around 36% with 1-inch separation in saline. The high efficiency will favorably reduce heat dissipation and electromagnetic field exposure to surrounding human tissues. The effect of eyeball rotation on the power transfer efficiency was investigated as well: the efficiency still maintains 14.7% with left and right deflections of 30 degrees during normal use. The surgical procedure for implanting the coils into the porcine eye was also demonstrated.

  6. High Performance Fuel Cell and Electrolyzer Membrane Electrode Assemblies (MEAs) for Space Energy Storage Systems

    NASA Technical Reports Server (NTRS)

    Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.

    2012-01-01

    Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements of 0.92 V/1.44 V at 200 mA/cm2 can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks. Electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection; membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst-driven; catalyst selection will be driven by durability requirements. Round-trip efficiency, based on cell performance, is approximately 65%.
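
The round-trip figure can be sanity-checked directly from the stated MEA voltages, since for the same charge cycled the energy ratio reduces to a voltage ratio:

```python
def round_trip_efficiency(v_fuel_cell: float, v_electrolyzer: float) -> float:
    # For the same charge passed on charge and discharge, energy out over
    # energy in reduces to the ratio of the two cell voltages.
    return v_fuel_cell / v_electrolyzer

# Using the MEA operating points quoted above (0.92 V discharge, 1.44 V
# charge at 200 mA/cm2), the cell-level round trip comes to ~64%,
# consistent with the "approximately 65%" figure.
print(round(round_trip_efficiency(0.92, 1.44), 3))  # 0.639
```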

  8. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    PubMed Central

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of the National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center. PMID:7703940

  9. A high-performance multilane microdevice system designed for the DNA forensics laboratory.

    PubMed

    Goedecke, Nils; McKenna, Brian; El-Difrawy, Sameh; Carey, Loucinda; Matsudaira, Paul; Ehrlich, Daniel

    2004-06-01

    We report preliminary testing of "GeneTrack", an instrument designed for the specific application of multiplexed short tandem repeat (STR) DNA analysis. The system supports a glass microdevice with 16 lanes of 20 cm effective length and double-T cross injectors. A high-speed galvanometer-scanned four-color detector was specially designed to accommodate the high elution rates on the microdevice. All aspects of the system were carefully matched to practical crime lab requirements for rapid reproducible analysis of crime-scene DNA evidence in conjunction with the United States DNA database (CODIS). Statistically significant studies demonstrate that an absolute, three-sigma, peak accuracy of 0.4-0.9 base pair (bp) can be achieved for the CODIS 13-locus multiplex, utilizing a single channel per sample. Only 0.5 microL of PCR product is needed per lane, a significant reduction in the consumption of costly chemicals in comparison to commercial capillary machines. The instrument is also designed to address problems in temperature-dependent decalibration and environmental sensitivity, which are weaknesses of the commercial capillary machines for the forensics application.

  10. High performance electrophoresis system for site-specific entrapment of nanoparticles in a nanoarray

    NASA Astrophysics Data System (ADS)

    Han, Jin-Hee; Lakshmana, Sudheendra; Kim, Hee-Joo; Hass, Elizabeth A.; Gee, Shirley; Hammock, Bruce D.; Kennedy, Ian

    2010-02-01

    A nanoarray, integrated with an electrophoretic system, was developed to trap nanoparticles in their corresponding nanowells. This nanoarray overcomes the complications of losing the function and activity of proteins binding to the surface in conventional microarrays, while using minimal amounts of sample. The nanoarray is also superior to other biosensors that use immunoassays in terms of lowering the limit of detection to the femto- or attomolar level. In addition, our electrophoretic particle entrapment system (EPES) is able to trap nanoparticles effectively using a low trapping force for a short duration; therefore, good conditions for biological samples conjugated to particles can be maintained. The channels were patterned into a bilayer consisting of a PMMA and LOL coating on a conductive indium tin oxide (ITO)-coated glass slide by e-beam lithography. Suspensions of 170 nm nanoparticles were then added to the chip, which was connected to a positive voltage. On top of the droplet, another ITO-coated glass slide was placed and connected to a ground terminal. Negatively charged fluorescent nanoparticles (blue emission) were selectively trapped onto the ITO surface at the bottom of the wells by following the electric field lines. Numerical modeling was performed using the commercially available software COMSOL Multiphysics to provide a better understanding of the phenomenon of electrophoresis in a nanoarray. The simulation results are also useful for optimally designing a nanoarray for practical applications.

  11. Design of high performance multivariable control systems for supermaneuverable aircraft at high angle of attack

    NASA Technical Reports Server (NTRS)

    Valavani, Lena

    1995-01-01

    The main motivation for the work under the present grant was to use nonlinear feedback linearization methods to further enhance performance capabilities of the aircraft, and robustify its response throughout its operating envelope. The idea was to use these methods in lieu of standard Taylor series linearization, in order to obtain a well behaved linearized plant, in its entire operational regime. Thus, feedback linearization was going to constitute an 'inner loop', which would then define a 'design plant model' to be compensated for robustness and guaranteed performance in an 'outer loop' application of modern linear control methods. The motivation for this was twofold; first, earlier work had shown that by appropriately conditioning the plant through conventional, simple feedback in an 'inner loop', the resulting overall compensated plant design enjoyed considerable enhancement of performance robustness in the presence of parametric uncertainty. Second, the nonlinear techniques did not have any proven robustness properties in the presence of unstructured uncertainty; a definition of robustness (and performance) is very difficult to achieve outside the frequency domain; to date, none is available for the purposes of control system design. Thus, by proper design of the outer loop, such properties could still be 'injected' in the overall system.

  12. Metal-based anode for high performance bioelectrochemical systems through photo-electrochemical interaction

    NASA Astrophysics Data System (ADS)

    Liang, Yuxiang; Feng, Huajun; Shen, Dongsheng; Long, Yuyang; Li, Na; Zhou, Yuyang; Ying, Xianbin; Gu, Yuan; Wang, Yanfeng

    2016-08-01

    This paper introduces a novel composite anode that uses light to enhance current generation and accelerate biofilm formation in bioelectrochemical systems. The composite anode is composed of a 316L stainless steel substrate and a nanostructured α-Fe2O3 photocatalyst (PSS). The electrode properties, current generation, and biofilm properties of the anode are investigated. In terms of photocurrent, the optimal deposition and heat-treatment times are found to be 30 min and 2 min, respectively, which result in a maximum photocurrent of 0.6 A m-2. The start-up time of the PSS is 1.2 days and the maximum current density is 2.8 A m-2, twice and 25 times those of the unmodified anode, respectively. The current density of the PSS remains stable during 20 days of illumination. Confocal laser scanning microscopy images show that the PSS benefits biofilm formation, while electrochemical impedance spectroscopy indicates that the PSS reduces the charge-transfer resistance of the anode. Our findings show that photo-electrochemical interaction is a promising way to enhance the biocompatibility of metal anodes for bioelectrochemical systems.

  13. Cpl6: The New Extensible, High-Performance Parallel Coupler for the Community Climate System Model

    SciTech Connect

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge, Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  14. High Performance CMOS Light Detector with Dark Current Suppression in Variable-Temperature Systems

    PubMed Central

    Lin, Wen-Sheng; Sung, Guo-Ming; Lin, Jyun-Long

    2016-01-01

    This paper presents a dark current suppression technique for a light detector in a variable-temperature system. The light detector architecture comprises a photodiode for sensing the ambient light, a dark current diode for conducting dark current suppression, and a current subtractor that is embedded in the current amplifier with enhanced dark current cancellation. The measured dark current of the proposed light detector is lower than that of the epichlorohydrin photoresistor or cadmium sulphide photoresistor. This is advantageous in variable-temperature systems, especially for those with many infrared light-emitting diodes. Experimental results indicate that the maximum dark current of the proposed current amplifier is approximately 135 nA at 125 °C, a near zero dark current is achieved at temperatures lower than 50 °C, and dark current and temperature exhibit an exponential relation at temperatures higher than 50 °C. The dark current of the proposed light detector is lower than 9.23 nA and the linearity is approximately 1.15 μA/lux at an external resistance RSS = 10 kΩ and environmental temperatures from 25 °C to 85 °C. PMID:28025530

  15. High Performance CMOS Light Detector with Dark Current Suppression in Variable-Temperature Systems.

    PubMed

    Lin, Wen-Sheng; Sung, Guo-Ming; Lin, Jyun-Long

    2016-12-23

    This paper presents a dark current suppression technique for a light detector in a variable-temperature system. The light detector architecture comprises a photodiode for sensing the ambient light, a dark current diode for conducting dark current suppression, and a current subtractor that is embedded in the current amplifier with enhanced dark current cancellation. The measured dark current of the proposed light detector is lower than that of the epichlorohydrin photoresistor or cadmium sulphide photoresistor. This is advantageous in variable-temperature systems, especially for those with many infrared light-emitting diodes. Experimental results indicate that the maximum dark current of the proposed current amplifier is approximately 135 nA at 125 °C, a near zero dark current is achieved at temperatures lower than 50 °C, and dark current and temperature exhibit an exponential relation at temperatures higher than 50 °C. The dark current of the proposed light detector is lower than 9.23 nA and the linearity is approximately 1.15 μA/lux at an external resistance RSS = 10 kΩ and environmental temperatures from 25 °C to 85 °C.
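
    The exponential current-temperature relation reported above is consistent with the classic rule of thumb that silicon dark current roughly doubles every 10 °C; a quick check using only the 135 nA point from the abstract (the doubling interval is an assumption):

```python
# Consistency check of the reported dark currents, assuming the classic
# rule of thumb that silicon dark current doubles roughly every 10 degC
# (the doubling interval is an assumption; 135 nA at 125 degC is reported).
I_125 = 135e-9            # maximum dark current at 125 degC, A (from abstract)
double_per = 10.0         # assumed doubling interval, degC

def dark_current(T):
    return I_125 * 2.0 ** ((T - 125.0) / double_per)

I_85 = dark_current(85.0)   # 135/2**4 = 8.44 nA, inside the 9.23 nA bound above
print(f"extrapolated dark current at 85 degC: {I_85 * 1e9:.2f} nA")
```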

  16. How to polarise all neutrons in one beam: a high performance polariser and neutron transport system

    NASA Astrophysics Data System (ADS)

    Rodriguez, D. Martin; Bentley, P. M.; Pappas, C.

    2016-09-01

    Polarised neutron beams are used in disciplines as diverse as magnetism, soft matter, and biology. However, many of these applications suffer from low flux, in part because existing neutron polarising methods filter out one of the spin states, giving a transmission of at most 50%. With the purpose of using the neutrons that are usually discarded, we propose a system that splits them according to their polarisation, flips them to match the spin direction, and then focuses them at the sample. Monte Carlo (MC) simulations show that this is achievable over a wide wavelength range with outstanding performance, at the price of a more divergent neutron beam at the sample position.

  17. Low-cost high performance adaptive optics real-time controller in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Chen, Shanqiu; Liu, Chao; Zhao, Enyi; Xian, Hao; Xu, Bing; Ye, Yutang

    2014-11-01

    This paper proposes a low-cost, high performance adaptive optics real-time controller for a free space optical communication system. The real-time controller is built from a 4-core CPU running Linux patched with the Real-Time Application Interface (RTAI) and a frame-grabber, with a total cost below $6000. A multi-core parallel processing scheme and SSE instruction optimization of the reconstruction process yield about a 5x speedup, and with a streamlined processing scheme the overall processing time for this 137-element adaptive optics system falls below 100 us, with a latency of about 50 us, meeting the requirement of processing at frame rates above 1709 Hz. A real-time data storage system based on a circular buffer lets the system store consecutive image frames and provides a way to analyze the image data and intermediate data such as slope information.
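
    The circular ("circle") buffer storage can be sketched as a fixed-capacity ring that always retains the most recent frames (a generic illustration, not the authors' implementation):

```python
# Generic sketch of the circular ("circle") frame buffer: a fixed-capacity
# ring that always retains the most recent frames for later analysis.
# This is an illustration, not the authors' implementation.
class FrameRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0                 # next write slot
        self.count = 0                # frames stored so far, capped at capacity

    def push(self, frame):
        self.buf[self.head] = frame
        self.head = (self.head + 1) % len(self.buf)
        self.count = min(self.count + 1, len(self.buf))

    def latest(self, n):
        """Return the last n frames, oldest first."""
        n = min(n, self.count)
        start = (self.head - n) % len(self.buf)
        return [self.buf[(start + i) % len(self.buf)] for i in range(n)]

ring = FrameRing(4)
for i in range(6):                    # overwrites the two oldest frames
    ring.push(f"frame{i}")
print(ring.latest(3))                 # ['frame3', 'frame4', 'frame5']
```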

  18. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    SciTech Connect

    Wu, Chase Qishi

    2016-12-01

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NICs) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. This task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows.
The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to
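
    As a back-of-envelope illustration of the flow-composition problem described above, one can count how many parallel component flows must be aggregated before the long-haul link is saturated (the per-flow ceilings below are illustrative picks from the quoted ranges):

```python
import math

# Back-of-envelope sketch of flow composition: how many parallel component
# flows must be aggregated to fill a 100 Gbps long-haul link, given per-flow
# ceilings (illustrative picks from the 10-40 Gbps NIC and 8-32 Gbps HCA
# ranges quoted above).
target = 100.0                    # Gbps, long-haul link
nic, hca = 40.0, 16.0             # Gbps per NIC flow / per storage HCA flow
flows_net = math.ceil(target / nic)
flows_sto = math.ceil(target / hca)
# storage is the bottleneck here, so the composed path needs max(net, storage)
print(f"network flows: {flows_net}, storage flows: {flows_sto}")
```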

  19. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at addressing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability, the talk will present the contributions provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF

  20. TheSNPpit—A High Performance Database System for Managing Large Scale SNP Data

    PubMed Central

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as inputs and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits per SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs resulting in a

  1. TheSNPpit-A High Performance Database System for Managing Large Scale SNP Data.

    PubMed

    Groeneveld, Eildert; Lichtenberg, Helmut

    2016-01-01

    The fast development of high throughput genotyping has opened up new possibilities in genetics while at the same time producing considerable data handling issues. TheSNPpit is a database system for managing large amounts of multi panel SNP genotype data from any genotyping platform. With an increasing rate of genotyping in areas like animal and plant breeding as well as human genetics, hundreds of thousands of individuals already need to be managed. While the common database design with one row per SNP can manage hundreds of samples, this approach becomes progressively slower as the size of the data sets increases, until it finally fails completely once tens or even hundreds of thousands of individuals need to be managed. TheSNPpit implements three ideas to accommodate such large scale experiments: highly compressed vector storage in a relational database, set based data manipulation, and a very fast export written in C, with Perl as the base for the framework and PostgreSQL as the database backend. Its novel subset system allows the creation of named subsets based on the filtering of SNPs (by major allele frequency, no-calls, and chromosomes) and manually applied sample and SNP lists at negligible storage costs, thus avoiding the issue of proliferating file copies. The named subsets are exported for downstream analysis. PLINK ped and map files are processed as inputs and outputs. TheSNPpit allows management of different panel sizes in the same population of individuals when higher density panels replace previous lower density versions, as occurs in animal and plant breeding programs. A completely generalized procedure allows storage of phenotypes. TheSNPpit occupies only 2 bits per SNP, implying a capacity of 4 million SNPs per 1 MB of disk storage. To investigate performance scaling, a database with more than 18.5 million samples has been created with 3.4 trillion SNPs from 12 panels ranging from 1000 through 20 million SNPs resulting in a
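
    The 2-bits-per-SNP figure implies four genotype calls per byte, i.e. about 4 million SNPs per MB as stated. A minimal sketch of such bit packing (the code mapping 0/1/2/3 is illustrative, not TheSNPpit's actual encoding):

```python
# Sketch of 2-bit genotype packing in the spirit of TheSNPpit's compressed
# vector storage: four calls per byte, hence ~4 million SNPs per MB.
# The code mapping (0/1/2 = allele counts, 3 = no-call) is illustrative.
def pack(genotypes):
    out = bytearray((len(genotypes) + 3) // 4)
    for i, g in enumerate(genotypes):
        out[i // 4] |= (g & 0b11) << (2 * (i % 4))
    return bytes(out)

def unpack(data, n):
    return [(data[i // 4] >> (2 * (i % 4))) & 0b11 for i in range(n)]

calls = [0, 1, 2, 3, 2, 2, 0, 1, 1]
packed = pack(calls)
assert unpack(packed, len(calls)) == calls
print(f"{len(calls)} calls stored in {len(packed)} bytes")
```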

  2. High-Performance Water Electrolysis System with Double Nanostructured Superaerophobic Electrodes.

    PubMed

    Xu, Wenwen; Lu, Zhiyi; Wan, Pengbo; Kuang, Yun; Sun, Xiaoming

    2016-05-01

    Catalyst screening and structural optimization are both essential for pursuing a highly efficient water electrolysis system (WES) with reduced energy supply. This study demonstrates an advanced WES with double superaerophobic electrodes, achieved by constructing nanostructured NiMo alloy and NiFe layered double hydroxide (NiFe-LDH) films for the hydrogen evolution and oxygen evolution reactions, respectively. The superaerophobic property gives rise to significantly reduced adhesion forces to gas bubbles and thereby accelerates the release of hydrogen and oxygen bubbles. Benefiting from these metrics and the high intrinsic activities of the catalysts, this WES affords an early onset potential (≈1.5 V) for water splitting and an ultrafast catalytic current density increase (≈0.83 mA mV(-1)), resulting in ≈2.69 times higher performance than the counterpart based on commercial Pt/C and IrO2/C catalysts at 1.9 V. Moreover, enhanced performance at high temperature as well as prominent stability further demonstrate the practical applicability of this WES. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Architecture of a high-performance PACS based on a shared file system

    NASA Astrophysics Data System (ADS)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high speed, fault tolerant image file server, or Working Storage Unit (WSU), combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low cost, easy to use, yet extremely powerful medical image workstations. The Siemens LiteboxTM application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  4. Microvalve Enabled Digital Microfluidic Systems for High Performance Biochemical and Genetic Analysis.

    PubMed

    Jensen, Erik C; Zeng, Yong; Kim, Jungkyu; Mathies, Richard A

    2010-12-01

    Microfluidic devices offer unparalleled capability for digital microfluidic automation of sample processing and complex assay protocols in medical diagnostic and research applications. In our own work, monolithic membrane valves have enabled the creation of two platforms that precisely manipulate discrete, nanoliter-scale volumes of sample. The digital microfluidic Automaton uses two-dimensional microvalve arrays to combinatorially process nanoliter-scale sample volumes. This programmable system enables rapid integration of diverse assay protocols using a universal processing architecture. Microfabricated emulsion generator array (MEGA) devices integrate actively controlled 3-microvalve pumps to enable on-demand generation of uniform droplets for statistical encapsulation of microbeads and cells. A MEGA device containing 96 channels confers the capability of generating up to 3.4 × 10(6) nanoliter-volume droplets per hour for ultrahigh-throughput detection of rare mutations in a vast background of normal genotypes. These novel digital microfluidic platforms offer significant enhancements in throughput, sensitivity, and programmability for automated sample processing and analysis.
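
    Statistical encapsulation of beads and cells in droplets is commonly modeled as a Poisson process; the sketch below (the mean loading density is an assumed illustrative value, not from the paper) shows the resulting trade-off between empty and single-cell droplets:

```python
import math

# Hedged sketch: bead/cell encapsulation in droplets is commonly modeled
# as a Poisson process. The mean loading lam is an illustrative assumption.
def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 0.1                         # assumed mean cells per droplet (dilute)
p0 = poisson_pmf(0, lam)          # empty droplets
p1 = poisson_pmf(1, lam)          # exactly one cell: the useful events
print(f"empty: {p0:.3f}, single-cell: {p1:.3f}, multi: {1 - p0 - p1:.4f}")
```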

  5. Numerical simulation of the convective heat transfer on high-performance computing systems

    NASA Astrophysics Data System (ADS)

    Stepanov, S. P.; Vasilyeva, M. V.; Vasilyev, V. I.

    2016-10-01

    In this work, we consider a coupled system of equations for convective heat transfer and flow, which describes the processes of natural or forced convection in a bounded domain. The mathematical model includes the Navier-Stokes equations for the flow and the heat transfer equation for the temperature. The numerical implementation is based on the finite element method, which makes it possible to account for the complex geometry of the modeled objects. For numerical stabilization of the convective heat transfer equation at high Peclet numbers, we use the streamline upwind Petrov-Galerkin (SUPG) method. The results of the numerical simulations are presented for the 2D formulation. As test problems, we consider flow and heat transfer in a technical construction subject to heat sources and the influence of air temperature. We couple this formulation with the heat transfer problem in the surrounding ground and investigate the influence of the construction on the ground under permafrost conditions, as well as the influence of the ground on the temperature distribution in the construction. The numerical computations are performed on the computational cluster of the North-Eastern Federal University.
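
    For the SUPG stabilization mentioned above, a common choice of the element stabilization parameter for 1D linear elements is tau = h/(2|u|) * (coth(Pe) - 1/Pe), with element Peclet number Pe = |u| h / (2 kappa); a minimal sketch with illustrative values (not from the paper):

```python
import math

# Sketch of the SUPG stabilization parameter for 1D linear elements:
# tau = h/(2|u|) * (coth(Pe) - 1/Pe), element Peclet Pe = |u|*h/(2*kappa).
# Input values are illustrative, not from the paper.
def supg_tau(u, h, kappa):
    Pe = abs(u) * h / (2.0 * kappa)
    xi = 1.0 / math.tanh(Pe) - 1.0 / Pe   # optimal upwind function
    return h / (2.0 * abs(u)) * xi

u, h, kappa = 1.0, 0.01, 1e-4   # velocity, element size, thermal diffusivity
tau = supg_tau(u, h, kappa)
print(f"element Peclet = {abs(u)*h/(2*kappa):.0f}, tau = {tau:.2e}")
```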

  6. Coal-fired high performance power generating system. Quarterly progress report, October 1--December 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    Our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (FUTAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The Cycle Optimization effort under Task 2 outlines the evolution of our designs. The basic combined cycle approach now includes exhaust gas recirculation to quench the flue gas before it enters the convective air heater. By selecting the quench gas from a downstream location it will be clean enough and cool enough (ca. 300F) to be driven by a commercially available fan and still minimize the volume of the convective air heater. Further modeling studies on the long axial flame, under Task 3, have demonstrated that this configuration is capable of providing the necessary energy flux to the radiant air panels. This flame with its controlled mixing constrains the combustion to take place in a fuel rich environment, thus minimizing the NO{sub x} production. Recent calculations indicate that the NO{sub x} produced is low enough that the SNCR section can further reduce it to within the DOE goal of 0.15 lbs/MBTU of fuel input. Also under Task 3 the air heater design optimization continued.

  7. Kinetic study on external mass transfer in high performance liquid chromatography system.

    PubMed

    Miyabe, Kanji; Kawaguchi, Yuuki; Guiochon, Georges

    2010-04-30

    External mass transfer coefficients (k(f)) were measured for a column packed with fully porous C(18)-silica spherical particles (50.6 microm in diameter), eluted with a methanol/water mixture (70/30, v/v). The pulse response and the peak-parking methods were used. Profiles of elution peaks of alkylbenzene homologues were recorded at flow rates between 0.2 and 2.0 mL min(-1). Peak-parking experiments were conducted under the same conditions to measure intraparticle and pore diffusivity and surface diffusion coefficients. Finally, the values of k(f) for these compounds at 298 K were derived from the first and second moments of the elution peaks by subtracting the contribution of intraparticle diffusion to band broadening. The Sherwood number (Sh) was thus measured under conditions in which the Reynolds (Re) and Schmidt (Sc) numbers varied from 0.004 to 0.05 and from 1.8x10(3) to 2.7x10(3), respectively. We found that Sh is proportional to Re(alpha) and Sc(beta) and that the correlation between these three dimensionless parameters is almost the same as those given by conventional literature equations. The values of alpha and beta were close to those in the literature correlations, between 0.26 and 0.41 and between 0.31 and 0.36, respectively. The use of the Wilson-Geankoplis equation to estimate k(f) values entails a relative error of ca. 15%. Thus, conventional literature correlations provide correct estimates of k(f) in HPLC systems, even for particle sizes of the order of a micrometer.
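
    A hedged sketch of how such a literature correlation is applied: using the Wilson-Geankoplis form Sh = (1.09/eps)(Re Sc)^(1/3) with illustrative inputs drawn from the quoted ranges (the porosity and diffusivity below are assumptions):

```python
# Hedged sketch of applying the Wilson-Geankoplis correlation,
# Sh = (1.09/eps) * (Re*Sc)**(1/3), to estimate k_f = Sh*Dm/d_p.
# Porosity and diffusivity are assumptions; Re, Sc are picked from the
# ranges quoted above; d_p is the particle diameter from the abstract.
eps = 0.4                 # assumed external (interstitial) porosity
d_p = 50.6e-6             # particle diameter, m
Dm = 1.0e-9               # assumed molecular diffusivity, m^2/s
Re, Sc = 0.02, 2.0e3      # within the reported ranges

Sh = (1.09 / eps) * (Re * Sc) ** (1.0 / 3.0)
k_f = Sh * Dm / d_p       # external mass transfer coefficient, m/s
print(f"Sh = {Sh:.2f}, k_f = {k_f:.2e} m/s")
```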

  8. Cosensitized Porphyrin System for High-Performance Solar Cells with TOF-SIMS Analysis.

    PubMed

    Wu, Wenjun; Xiang, Huaide; Fan, Wei; Wang, Jinglin; Wang, Haifeng; Hua, Xin; Wang, Zhaohui; Long, Yitao; Tian, He; Zhu, Wei-Hong

    2017-05-17

    To date, development of organic sensitizers has been predominately focused on light harvesting, highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels, and the electron transferring process. In contrast, their adsorption mode as well as the dynamic loading behavior onto nanoporous TiO2 is rarely considered. Herein, we have employed time-of-flight secondary ion mass spectrometry (TOF-SIMS) to gain insight into the competitive dye adsorption modes and kinetics in a cosensitized porphyrin system. Using the novel porphyrin dye FW-1 and the D-A-π-A featured dye WS-5, the different bond-breaking modes in TOF-SIMS and the dynamic dye-loading amounts during the coadsorption process are compared for two different anchoring groups, benzoic acid and cyanoacrylic acid. From the bombardment mode in the TOF-SIMS spectra, we speculate that the cyano group grafts onto nanoporous TiO2 as tridentate binding for the common anchoring unit of cyanoacrylic acid, and we confirmed this through extensive first-principles density functional theory calculations anchoring either the carboxyl or the cyano group, which show that the cyano group can efficiently participate in the adsorption of the WS-5 molecule onto the TiO2 nanocrystal. The grafting reinforcement interaction between the cyano group and TiO2 in WS-5 explains its rapid adsorption characteristics. A strong coordinate bond between the lone pair of electrons on the nitrogen or oxygen atom and the Lewis acid sites of TiO2 can increase electron injection efficiencies with respect to those from the bond between the benzoic acid group and the Brønsted acid sites of the TiO2 surface. Upon optimization of the coadsorption process with dye WS-5, the photoelectric conversion efficiency based on porphyrin dye FW-1 is increased from 6.14 to 9.72%. This study of the adsorption dynamics of organic sensitizers with TOF-SIMS analysis may provide a new avenue for improving cosensitized solar cells.

  9. High Performance Microbial Fuel Cells and Supercapacitors Using Micro-Electro-Mechanical System (MEMS) Technology

    NASA Astrophysics Data System (ADS)

    Ren, Hao

    A microbial fuel cell (MFC) is a bio-inspired, carbon-neutral, renewable electrochemical converter that extracts electricity from the catabolic reactions of micro-organisms. It is a promising technology capable of directly converting the abundant biomass on the planet into electricity, potentially alleviating the emerging global warming and energy crisis. The current and power densities of MFCs are low compared with conventional energy conversion techniques. Since its debut in 2002, many studies have been performed adopting a variety of new configurations and structures to improve the power density. The reported maximum areal and volumetric power densities range from 19 mW/m2 to 1.57 W/m2 and from 6.3 W/m3 to 392 W/m3, respectively, which are still low compared with conventional energy conversion techniques. In this dissertation, the impact of the scaling effect on the performance of MFCs is investigated, and it is found that scaling down the characteristic length of MFCs increases the surface area to volume ratio and improves the current and power density. As a result, a miniaturized MFC fabricated by Micro-Electro-Mechanical System (MEMS) technology with a gold anode is presented, demonstrating a high power density of 3300 W/m3. The performance of the MEMS MFC is further improved by adopting anodes with higher surface area to volume ratio, such as carbon nanotube (CNT) and graphene based anodes, raising the maximum power density to a record high of 11220 W/m3. A novel supercapacitor operated by regulating the respiration of the bacteria is also presented, reaching high current and power densities of 531.2 A/m2 (1,060,000 A/m3) and 197.5 W/m2 (395,000 W/m3), respectively, one to two orders of magnitude higher than any previously reported microbial electrochemical technique.
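
    The scaling argument can be made concrete with a toy calculation: for a cube-shaped chamber of side L the surface-to-volume ratio is 6/L, so miniaturization directly raises the anode area available per unit volume (the chamber sizes are illustrative):

```python
# Toy illustration of the scaling argument above: for a cube-shaped
# chamber of side L, the surface-to-volume ratio is 6/L, so shrinking
# the device raises anode area per unit volume. Values are illustrative.
ratios = {L: 6.0 / L for L in [1e-2, 1e-3, 1e-4]}   # sides: 1 cm, 1 mm, 100 um
for L, sv in ratios.items():
    print(f"L = {L * 1e3:7.2f} mm  ->  S/V = {sv:8.0f} 1/m")
```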

  10. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans Joachim; Duffy, Donald R.

    1989-01-01

    A summary is presented of the concentrator conceptual design work performed under a NASA-funded project. The design study centers around two basic efforts: conceptual design of a self-deploying, high-performance parabolic concentrator; and materials selection for a lightweight, shape-stable concentrator. The primary structural material selected for the concentrator is PEEK/carbon fiber composite. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes a circular shape with a void in the center. The deployable solar concentrator concept is applicable to a range of solar dynamic power systems of 25 kWe to more than 75 kWe.

  11. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans Joachim; Duffy, Donald R.

    1989-01-01

    A summary is presented of the concentrator conceptual design work performed under a NASA-funded project. The design study centers around two basic efforts: conceptual design of a self-deploying, high-performance parabolic concentrator; and materials selection for a lightweight, shape-stable concentrator. The primary structural material selected for the concentrator is PEEK/carbon fiber composite. The deployment concept utilizes rigid gore-shaped reflective panels. The assembled concentrator takes a circular shape with a void in the center. The deployable solar concentrator concept is applicable to a range of solar dynamic power systems of 25 kWe to more than 75 kWe.

  12. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    PubMed

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  14. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    SciTech Connect

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

This report documents our first-year efforts to address the use of many-core processors for high performance cyber protection. As demands grow for higher bandwidth (beyond 1 Gbit/s) on network connections, the need for faster and more efficient solutions to cyber security grows with them. Fortunately, in recent years the development of many-core network processors has attracted increased interest. Prior working experience with many-core processors led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space, and cooling resources, which makes traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.

  15. Application Characterization at Scale: Lessons learned from developing a distributed Open Community Runtime system for High Performance Computing

    SciTech Connect

    Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres; Manzano Franco, Joseph B.; Gao, Guang R.

    2016-05-16

Since 2012, the U.S. Department of Energy's X-Stack program has been developing solutions, including runtime systems, programming models, languages, compilers, and tools, for Exascale system software to address crucial performance and power requirements. Fine-grain programming models and runtime systems show great potential to efficiently utilize the underlying hardware, and are thus essential to many X-Stack efforts. An abundance of small tasks can better utilize the vast parallelism available on current and future machines; moreover, finer tasks can recover faster and adapt better, owing to their reduced state and control. Nevertheless, current applications were written to exploit older paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing), so to fully realize the advantages of these new systems, applications need to be adapted to the new paradigms. As part of the porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand application performance bottlenecks and how to resolve them. This paper presents a characterization study of a novel high performance runtime system, the Open Community Runtime (OCR), using key HPC kernels as its vehicle. The study makes the following contributions: one of the first high performance, fine-grain, distributed memory runtime systems implementing the OCR standard (version 0.99a), and a characterization of key HPC kernels in terms of runtime primitives in both intra- and inter-node environments. Running on a general purpose cluster, we found up to a 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each, and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.
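The relative speed-ups quoted above can be converted into a rough parallel-efficiency figure, assuming (the abstract does not state this explicitly) that the baseline is a single core:

```python
def parallel_efficiency(speedup, nodes, cores_per_node):
    """Achieved speed-up as a fraction of ideal linear scaling over all cores."""
    return speedup / (nodes * cores_per_node)

# Figures reported for the OCR runtime study:
cholesky = parallel_efficiency(1635, 128, 16)        # 128 * 16 = 2048 cores
smith_waterman = parallel_efficiency(1864, 128, 30)  # 128 * 30 = 3840 cores
```

Under that assumption, the tiled Cholesky kernel reaches roughly 80% of ideal linear scaling and the Smith-Waterman kernel roughly 49%.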

  16. Using NERSC High-Performance Computing (HPC) systems for high-energy nuclear physics applications with ALICE

    NASA Astrophysics Data System (ADS)

    Fasel, Markus

    2016-10-01

High-Performance Computing Systems are powerful tools tailored to support large-scale applications that rely on low-latency inter-process communications to run efficiently. By design, these systems often impose constraints on application workflows, such as limited external network connectivity and whole node scheduling, that make more general-purpose computing tasks, such as those commonly found in high-energy nuclear physics applications, more difficult to carry out. In this work, we present a tool designed to simplify access to such complicated environments by handling the common tasks of job submission, software management, and local data management, in a framework that is easily adaptable to the specific requirements of various computing systems. The tool, initially constructed to process stand-alone ALICE simulations for detector and software development, was successfully deployed on the NERSC computing systems, Carver, Hopper and Edison, and is being configured to provide access to the next generation NERSC system, Cori. In this report, we describe the tool and discuss our experience running ALICE applications on NERSC HPC systems. The discussion will include our initial benchmarks of Cori compared to other systems and our attempts to leverage the new capabilities offered with Cori to support data-intensive applications, with a future goal of full integration of such systems into ALICE grid operations.

  17. Relationships of cognitive and metacognitive learning strategies to mathematics achievement in four high-performing East Asian education systems.

    PubMed

    Areepattamannil, Shaljan; Caleon, Imelda S

    2013-01-01

The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 systems, memorization strategies were negatively associated with mathematics achievement, whereas control strategies were positively associated with it. The association between elaboration strategies and mathematics achievement, however, was mixed: in Shanghai-China and Korea, elaboration strategies were not associated with mathematics achievement, while in Hong Kong-China and Singapore they were negatively associated with it. Implications of these findings are briefly discussed.

  18. Small Delay and High Performance AD/DA Converters of Lease Circuit System for AM&FM Broadcast

    NASA Astrophysics Data System (ADS)

    Takato, Kenji; Suzuki, Dai; Ishii, Takashi; Kobayashi, Masato; Yamada, Hirokazu; Amano, Shigeru

Many AM&FM broadcasting stations in Japan are connected by NTT's leased circuit system, and a small-delay, high-performance AD/DA converter was developed for this system. The system was designed based on ITU-T Recommendation J.41 (384 kbps); the transmission signal is 11-bit, 32 kHz, and the gain-frequency characteristic between 40 Hz and 15 kHz has to be quite flat. The ΔΣ AD/DA converter LSIs on the market today for audio applications achieve very high performance, but that performance is not sufficient for the leased circuit system. We found that the delay and gain-frequency requirements cannot be met by a ΔΣ AD/DA converter LSI in normal operation alone, because the 15 kHz maximum signal frequency and the 16 kHz Nyquist frequency are so close that aliasing occurs around the Nyquist frequency. In this paper, we design an AD/DA architecture with small delay (1 ms) and a sharp cut-off LPF (100 dB attenuation at 16 kHz, and 1500 dB/oct from 15 kHz to 16 kHz) by operating the ΔΣ AD/DA converter LSIs at an oversampling rate such as 128 kHz and adding a custom LPF designed as an infinite impulse response (IIR) filter. The IIR filter is a 16th-order elliptic type consisting of eight biquad filters in series. We describe how to evaluate the stability of the IIR filter theoretically, by calculating the frequency response, pole-zero layout, and impulse response of each biquad filter, and experimentally, by adding an overflow detection circuit to each filter and applying an overload input signal.
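The cascade-of-biquads structure described in the abstract lends itself to a simple numerical check: each second-order section is stable exactly when its poles lie inside the unit circle, and the cascade's frequency response is the product of the section responses. A minimal sketch in Python (the coefficients below are an illustrative low-pass section, not the paper's elliptic design):

```python
import cmath
import math

# Each section is (b0, b1, b2, a1, a2) for
# H(z) = (b0 + b1*z**-1 + b2*z**-2) / (1 + a1*z**-1 + a2*z**-2)

def biquad_poles(a1, a2):
    """Roots of z**2 + a1*z + a2, i.e. the poles of one biquad section."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    return (-a1 + disc) / 2.0, (-a1 - disc) / 2.0

def is_stable(sections):
    """A cascade is stable iff every section's poles are inside the unit circle."""
    return all(abs(p) < 1.0
               for _, _, _, a1, a2 in sections
               for p in biquad_poles(a1, a2))

def gain(sections, f, fs):
    """Magnitude of the cascade response at frequency f (Hz), sample rate fs."""
    z = cmath.exp(2j * math.pi * f / fs)
    h = complex(1.0)
    for b0, b1, b2, a1, a2 in sections:
        h *= (b0 + b1 / z + b2 / z ** 2) / (1.0 + a1 / z + a2 / z ** 2)
    return abs(h)

# One illustrative low-pass section (NOT the paper's 16th-order design):
sections = [(0.2066, 0.4131, 0.2066, -0.3695, 0.1958)]
print(is_stable(sections))  # → True
```

Running all eight biquads of a design through `is_stable` and sampling `gain` over 40 Hz to 16 kHz at fs = 128 kHz would reproduce the theoretical checks the authors describe (frequency response and pole layout); the impulse-response and overflow tests require the actual hardware.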

  19. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

With the ever increasing bandwidth and processing demands of today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power-efficient, high performance, highly dependable, fault-tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  20. Coal-fired high performance power generating system. Quarterly progress report, October 1, 1994--December 31, 1994

    SciTech Connect

    1995-08-01

This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: (1) >47% thermal efficiency; (2) NOx, SOx, and particulates ≤25% NSPS; (3) coal providing ≥65% of heat input; (4) all solid wastes benign. In our design we have tried to render all waste streams benign and, where possible, to convert them to commercial products; vitrified slag appears to have commercial value. If the flyash is reinjected through the furnace along with the dry bottom ash, the amount of the less valuable solid waste stream (ash) can be minimized. A limitation on this procedure arises if it results in the buildup of toxic metal concentrations in the slag, the flyash, or other APCD components. We have assembled analytical tools to describe the progress of specific toxic metals through our system; the outline of the analytical procedure is presented in the first section of this report. The strength and corrosion resistance of five candidate refractories were studied this quarter, and some of the results are presented and compared for selected preparation conditions (mixing, drying time, and drying temperature). A 100-hour pilot-scale slagging combustor test of the prototype radiant panel is being planned; several potential refractory brick materials are under review, and five will be selected for the first 100-hour test. The design of the prototype panel is presented along with some of the test requirements.

  1. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  3. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

DOE has launched a program to make a step change in power plant technology to 1500 F steam, since the highest possible performance gains can be achieved in a 1500 F steam system using a topping turbine with a back-pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  4. Automated high-performance cIMT measurement techniques using patented AtheroEdge™: a screening and home monitoring system.

    PubMed

    Molinari, Filippo; Meiburger, Kristen M; Suri, Jasjit

    2011-01-01

The evaluation of the carotid artery wall is fundamental for the assessment of cardiovascular risk. This paper presents the general architecture of an automatic strategy that segments the lumen-intima and media-adventitia borders, classified under the class of patented AtheroEdge™ systems (Global Biomedical Technologies, Inc., CA, USA). Guidelines for producing accurate and repeatable measurements of the intima-media thickness are provided, and the problem of the different distance metrics one can adopt is addressed. We compared the results of a completely automatic algorithm that we developed with those of a semi-automatic algorithm, and show final segmentation results for both techniques. The overall rationale is to provide user-independent, high-performance techniques suitable for screening and remote monitoring.

  5. Use of subcarrier multiplexing for self-routing of data packets in a high-performance system area network

    NASA Astrophysics Data System (ADS)

    Saraswat, Sanjay

    1998-10-01

In self-routing packet networks, the state of intermediate nodes (switches) is set or reset on the basis of information present in the packet header. Subcarrier multiplexing (SCM) modulates a number of frequency-separated RF sub-carriers onto a common laser at a single wavelength. SCM has the advantage of high data throughput; it also requires fewer opto-electronic components and avoids walk-off between header and payload due to fiber dispersion. In this paper we describe a novel use of sub-carrier multiplexing for self-routing of data packets within the switching fabric of a high performance system area network. Using SCM, data packets are routed optically to the destination without being converted to the electrical domain at intermediate stages within the network.

  6. Evaluation of C/C-SiC Composites as Potential Candidate Materials for High Performance Braking Systems

    NASA Astrophysics Data System (ADS)

    Saptono Duryat, Rahmat

    2016-05-01

This paper evaluates the characteristics and performance of C/C-SiC composites as candidate materials for high performance braking systems. A set of material specifications was derived from specific engineering design requirements, and the analysis was performed by formulating the functions, constraints, and objectives of design and materials selection. The function of a friction material is chiefly to provide friction and to absorb and dissipate energy, while withstanding load and maintaining structural adequacy and tribological characteristics at high temperature. The objective of the materials selection and design is to maximize the absorption and dissipation of energy while minimizing weight and cost. Candidate materials were evaluated on the basis of their friction and wear behavior, thermal capacity and conductivity, structural properties, manufacturing properties, and density. The paper provides a state-of-the-art example of how material, function, geometry, and design are interrelated.

  7. High-performance Sonitopia (Sonic Utopia): Hyper intelligent Material-based Architectural Systems for Acoustic Energy Harvesting

    NASA Astrophysics Data System (ADS)

    Heidari, F.; Mahdavinejad, M.

    2017-08-01

The rate of energy consumption worldwide, based on reliable statistics from international institutions such as the International Energy Agency (IEA), shows a significant increase in energy demand in recent years. Periodically recorded data show a continuously increasing trend in energy consumption, especially in developed countries as well as recently emerged developing economies such as China and India. Air pollution and water contamination resulting from high consumption of fossil energy resources may be considered a menace to civic ideals such as livability, conviviality, and people-oriented cities. At the same time, automobile dependency, car-oriented design, and other noisy activities in urban spaces are considered threats to urban life. Contemporary urban design and planning therefore concentrates on rethinking the ecology of sound, reorganizing the soundscape of neighborhoods, and redesigning the sonic order of urban space; it seems that contemporary architecture and planning trends, through soundscape mapping, seek a sonitopia (sonic + utopia). This paper proposes interactive, hyper-intelligent, material-based architectural systems for acoustic energy harvesting. The proposed architectural design system may result in high-performance architecture and planning strategies for future cities. The ultimate aim of the research is to develop a comprehensive system for acoustic energy harvesting that achieves noise reduction while remaining in harmony with architectural design. The research methodology is based on a literature review as well as experimental and quasi-experimental strategies, following the paradigm of designerly ways of doing and knowing. Because architectural design is solution-focused in its problem-solving process, the proposed systems should be hyper-intelligent rather than predefined procedures.
Therefore, the steps of the inference mechanism of the research include: 1- understanding sonic energy and noise potentials as energy

  8. New generation high performance in situ polarized 3He system for time-of-flight beam at spallation sources

    NASA Astrophysics Data System (ADS)

    Jiang, C. Y.; Tong, X.; Brown, D. R.; Glavic, A.; Ambaye, H.; Goyette, R.; Hoffmann, M.; Parizzi, A. A.; Robertson, L.; Lauter, V.

    2017-02-01

Modern spallation neutron sources generate high intensity neutron beams with a broad wavelength band, applied to exploring new nano- and meso-scale materials from a few atomic monolayers thick to complicated prototype device-like systems with multiple buried interfaces. The availability of high performance neutron polarizers and analyzers in neutron scattering experiments is vital for understanding magnetism in systems with novel functionalities. We report the development of a new generation of the in situ polarized 3He neutron polarization analyzer for the Magnetism Reflectometer at the Spallation Neutron Source at Oak Ridge National Laboratory. With a new optical layout and laser system, the 3He polarization reached and maintained 84%, compared to 76% in the first-generation system. The polarization improvement allows a transmission function varying from 50% to 15% to be achieved for the polarized neutron beam over the wavelength band of 2-9 Angstroms. This achievement enables a new class of experiments with optimal sensitivity to very small magnetic moments in nano systems and opens up the horizon for its applications.

  9. Development of a high-performance gantry system for a new generation of optical slope measuring profilers

    NASA Astrophysics Data System (ADS)

    Assoufid, Lahsen; Brown, Nathan; Crews, Dan; Sullivan, Joseph; Erdmann, Mark; Qian, Jun; Jemian, Pete; Yashchuk, Valeriy V.; Takacs, Peter Z.; Artemiev, Nikolay A.; Merthe, Daniel J.; McKinney, Wayne R.; Siewert, Frank; Zeschke, Thomas

    2013-05-01

A new high-performance metrology gantry system has been developed within the scope of a collaborative effort of the optics groups at the US Department of Energy synchrotron radiation facilities, together with the BESSY-II synchrotron at the Helmholtz Zentrum Berlin (Germany) and industrial vendors of x-ray optics and metrology instrumentation, directed at creating a new generation of optical slope measuring systems (OSMS) [1]. The slope measurement accuracy of the OSMS is expected to be <50 nrad, a firm requirement for current and future metrology of x-ray optics for the next generation of light sources. The fabricated system was installed and commissioned (December 2012) at the Advanced Photon Source (APS) at Argonne National Laboratory to replace the aging APS Long Trace Profiler (APS LTP-II). Preliminary tests were conducted (in January and May 2012) using the optical system configuration of the Nanometer Optical Component Measuring Machine (NOM) developed at Helmholtz Zentrum Berlin (HZB)/BESSY-II. With a flat Si mirror 350 mm long, with 200 nrad rms nominal slope error over a useful length of 300 mm, the system provides a repeatability of about 53 nrad, which corresponds to the design performance of 50 nrad rms accuracy for inspection of ultra-precise flat optics.
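A repeatability figure like the 53 nrad quoted above is typically obtained by comparing repeated slope scans of the same mirror. A minimal sketch of that calculation (the scan values below are synthetic, and the mean-offset removal is a simplifying assumption, not the instrument's actual data reduction):

```python
import math

def rms(values):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(v * v for v in values) / len(values))

def repeatability(scan_a, scan_b):
    """RMS difference between two repeated slope scans of the same mirror,
    after removing the mean offset (overall tilt) between the two runs."""
    diff = [a - b for a, b in zip(scan_a, scan_b)]
    mean = sum(diff) / len(diff)
    return rms([d - mean for d in diff])

# Two synthetic scans (slopes in rad) that differ by ~50 nrad of noise:
scan1 = [0.0, 100e-9, -100e-9, 50e-9]
scan2 = [50e-9, 50e-9, -150e-9, 100e-9]
```

For these synthetic scans the point-wise differences are ±50 nrad with zero mean, so `repeatability(scan1, scan2)` evaluates to 50 nrad.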

  10. Do perceived high performance work systems influence the relationship between emotional labour, burnout and intention to leave? A study of Australian nurses.

    PubMed

    Bartram, Timothy; Casimir, Gian; Djurkovic, Nick; Leggat, Sandra G; Stanton, Pauline

    2012-07-01

The purpose of this article was to explore the relationships between perceived high performance work systems, emotional labour, burnout, and intention to leave among nurses in Australia. Previous studies show that emotional labour and burnout are associated with increased intention to leave among nurses, and there is evidence that high performance work systems are associated with decreased turnover; however, no previous studies have examined the relationship between high performance work systems and emotional labour. A cross-sectional, correlational survey was conducted in Australia in 2008 with 183 nurses. Three hypotheses were tested with validated measures of emotional labour, burnout, intention to leave, and perceived high performance work systems. Principal component analysis was used to examine the structure of the measures; the mediation hypothesis was tested using Baron and Kenny's procedure, and the moderation hypothesis was tested using hierarchical regression with a product term. Emotional labour was positively associated with both burnout and intention to leave, and burnout mediated the relationship between emotional labour and intention to leave. Perceived high performance work systems negatively moderated the relationship between emotional labour and burnout: they not only reduced the strength of the adverse effect of emotional labour on burnout but also had a unique negative effect on intention to leave. Ensuring effective human resource management practice through the implementation of high performance work systems may reduce the burnout associated with emotional labour, and may thereby assist healthcare organizations in reducing nurse turnover. © 2012 Blackwell Publishing Ltd.
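The moderation test described above regresses the outcome on the predictor, the moderator, and their product term; a significant product-term coefficient indicates moderation. A self-contained sketch with synthetic data (the variable names, effect sizes, and the small least-squares solver are all illustrative, not the study's data or software):

```python
import random

def ols(X, y):
    """Least-squares coefficients for y = X @ beta via normal equations."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, k):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, k):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (Xty[r] - sum(XtX[r][c] * beta[c]
                                for c in range(r + 1, k))) / XtX[r][r]
    return beta

# Synthetic data: burnout = 0.5*labour - 0.3*labour*hpws + noise,
# i.e. HPWS weakens (negatively moderates) the labour-burnout link.
random.seed(1)
rows, y = [], []
for _ in range(500):
    labour, hpws = random.gauss(0, 1), random.gauss(0, 1)
    rows.append([1.0, labour, hpws, labour * hpws])
    y.append(0.5 * labour - 0.3 * labour * hpws + random.gauss(0, 0.1))
beta = ols(rows, y)
# beta[3] estimates the product-term (moderation) coefficient, near -0.3
```

In practice one would also compute standard errors and compare nested models (hierarchical regression), but recovering a negative product-term coefficient is the core of the moderation claim.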

  11. Engineering development of coal-fired high-performance power systems. Progress report, April 1--June 30, 1996

    SciTech Connect

    1996-12-31

    In Phase 1 of the project, a conceptual design of a coal-fired, high-performance power system (HIPPS) was developed, and small-scale R and D was done in critical areas of the design. The current phase of the project includes development through the pilot plant stage and design of a prototype plant that would be built in Phase 3. The power-generating system being developed in this project will be an improvement over current coal-fired systems. It is a combined-cycle plant. This arrangement is referred to as the All Coal HIPPS because it does not require any other fuels for normal operation. A fluidized bed, air-blown pyrolyzer converts coal into fuel gas and char. The char is fired in a high-temperature advanced furnace (HITAF) which heats both air for a gas turbine and steam for a steam turbine. The fuel gas from the pyrolyzer goes to a topping combustor where it is used to raise the air entering the gas turbine to 1288 C. In addition to the HITAF, steam duty is achieved with a heat-recovery steam generator (HRSG) in the gas turbine exhaust stream and economizers in the HITAF flue gas exhaust stream. Progress during the quarter is described.
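The combined-cycle arrangement described above gains efficiency because heat rejected by the gas-turbine topping cycle is recovered by the steam cycle. A textbook estimate of the ideal combination (the efficiencies below are illustrative, not the HIPPS design values):

```python
def combined_cycle_efficiency(eta_topping, eta_bottoming):
    """Ideal combined efficiency when the bottoming cycle is driven
    entirely by the topping cycle's reject heat."""
    return eta_topping + eta_bottoming * (1.0 - eta_topping)

# Illustrative values: a 38% gas turbine over a 35% steam cycle
eta = combined_cycle_efficiency(0.38, 0.35)  # ≈ 0.60, above either cycle alone
```

Real plants fall short of this ideal because not all reject heat is recovered, but the formula shows why a combined cycle can exceed the efficiency of either cycle on its own.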

  12. Achieving a high-performance health care system with universal access: what the United States can learn from other countries.

    PubMed

    Ginsburg, Jack A; Doherty, Robert B; Ralston, J Fred; Senkeeto, Naomi; Cooke, Molly; Cutler, Charles; Fleming, David A; Freeman, Brian P; Gluckman, Robert A; Liebow, Mark; McLean, Robert M; Musana, Kenneth A; Nichols, Patrick M; Purtle, Mark W; Reynolds, P Preston; Weaver, Kathleen M; Dale, David C; Levine, Joel S; Stubbs, Joseph W

    2008-01-01

This position paper concerns improving health care in the United States. Unlike previous highly focused policy papers by the American College of Physicians, this article takes a comprehensive approach to improving access, quality, and efficiency of care. The first part describes health care in the United States; the second compares it with health care in other countries; and the concluding section proposes lessons that the United States can learn from these countries, with recommendations for achieving a high-performance health care system in the United States. The article is based on a position paper developed by the American College of Physicians' Health and Public Policy Committee. That policy paper (not included in this article) also provides a detailed analysis of health care systems in 12 other industrialized countries. Although we can learn much from other health systems, the College recognizes that our political and social culture, demographics, and form of government will shape any solution for the United States. This caution notwithstanding, we have identified several approaches that have worked well for countries like ours and could probably be adapted to the unique circumstances in the United States.

  13. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans J.

    1991-01-01

NASA has initiated technology development programs to develop advanced solar dynamic power systems and components for space applications beyond 2000. The conceptual design work that was performed is described. The main efforts were: (1) conceptual design of a self-deploying, high-performance parabolic concentrator; and (2) materials selection for a lightweight, shape-stable concentrator. The deployment concept utilizes rigid gore-shaped reflective panels, and the assembled concentrator takes an annular shape with a void in the center. This deployable concentrator concept is applicable to a range of solar dynamic power systems from 25 kWe to in excess of 75 kWe, and allows for a family of power system sizes all using the same packaging and deployment technique. The primary structural material selected for the concentrator is a polyetheretherketone (PEEK)/carbon fiber composite, also referred to as APC-2 or Victrex. This composite has a nearly neutral coefficient of thermal expansion, which leads to shape-stable characteristics under thermal gradient conditions. Substantial efforts were undertaken to produce a highly specular surface on the composite. Although the overall coefficient of thermal expansion of the composite laminate is near zero, thermally induced stresses due to micro-movement of the fibers and matrix relative to each other cause the surface to become nonspecular.

  14. Monitoring and preparation of neoagaro- and agaro-oligosaccharide products by high performance anion exchange chromatography systems.

    PubMed

    Kazłowski, Bartosz; Pan, Chorng Liang; Ko, Yuan Tih

    2015-05-20

    A series of neoagaro-oligosaccharides (NAOS) were prepared by β-agarase digestion, and agaro-oligosaccharides (AOS) by HCl hydrolysis, from agarose with defined quantity and degree of polymerization (DP). Chain-length distributions in the crude product mixtures were monitored by two high performance anion exchange chromatography systems coupled with a pulsed amperometric detector. Method 1 utilized two separation columns, a CarboPac™ PA1 and a CarboPac™ PA100 connected in series, and method 2 used the PA100 alone. Method 1 resolved the products in size ranges of DP 1-46 for NAOS and DP 1-32 for AOS. Method 2 clearly resolved saccharide products up to DP 26. The optimized system, utilizing a semi-preparative CarboPac™ PA100 column, was connected with a fraction collector to isolate and quantify individually separated products. This study established systems for the preparation, qualitative and quantitative measurement, and isolation of various sizes of oligomers generated from agarose.

  15. A high performance system to study the influence of temperature in on-line solid-phase extraction capillary electrophoresis.

    PubMed

    Tascon, Marcos; Benavente, Fernando; Sanz-Nebot, Victoria; Gagliardi, Leonardo G

    2015-03-10

    A novel high performance system to control the temperature of the microcartridge in on-line solid-phase extraction capillary electrophoresis (SPE-CE) is introduced. The mini-device consists of a thermostatic bath that fits inside the cassette of any commercial CE instrument, while its temperature is controlled by an external liquid circuit connecting three water baths. The circuits are controlled from a switchboard connected to an array of electrovalves that allows the water circulation through the mini-thermostatic bath to be rapidly alternated between temperatures from 5 to 90 °C. The combination of the mini-device and the forced-air thermostatization system of the commercial CE instrument allows independent optimization of the temperature of the sample loading, clean-up, analyte elution and electrophoretic separation steps. The system is used to study the effect of temperature on the C18-SPE-CE analysis of the opioid peptides Dynorphin A (Dyn A), Endomorphin-1 (END) and Met-enkephalin (MET) in both standard solutions and spiked plasma samples. Extraction recoveries were shown to depend, with a non-monotonous trend, on the microcartridge temperature during sample loading, and became maximum at 60 °C. The results prove the potential of temperature control to further enhance sensitivity in SPE-CE when analytes are thermally stable.

  16. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1999-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper, only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the estimated closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  17. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1996-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper, only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  19. A high performance liquid chromatography system for quantification of hydroxyl radical formation by determination of dihydroxy benzoic acids.

    PubMed

    Owen, R W; Wimonwatwatee, T; Spiegelhalder, B; Bartsch, H

    1996-08-01

    The hypoxanthine/xanthine oxidase enzyme system is known to produce the superoxide ion and hydrogen peroxide during the hydroxylation of hypoxanthine via xanthine to uric acid. When chelated iron is included in this system, superoxide reduces iron(III) to iron(II), and the iron(II)-chelate reacts further with hydrogen peroxide to form the highly reactive hydroxyl radical. Because of the limitations of the colourimetric and spectrophotometric techniques by which the mechanisms of hydroxyl radical formation in the hypoxanthine/xanthine oxidase system have to date been monitored, a high performance liquid chromatography method was set up utilizing the ion-pair reagent tetrabutylammonium hydroxide and salicylic acid as an aromatic probe for quantification of hydroxyl radical formation. In the hypoxanthine/xanthine oxidase system, the major products of hydroxyl radical attack on salicylic acid were 2,5-dihydroxy benzoic acid and 2,3-dihydroxy benzoic acid in an approximate ratio of 5:1. Involvement of the hydroxyl radical in the hydroxylation of salicylic acid in this system was demonstrated by the scavenging potency of dimethyl sulphoxide, butanol and ethanol in particular. Phytic acid, which is considered to be an important protective dietary constituent against colorectal cancer, inhibited hydroxylation of salicylic acid at a concentration one order of magnitude lower than the classical scavengers, but was only effective in the absence of EDTA. The method has been applied to the study of free radical generation in faeces, and preliminary results indicate that the faecal flora are able to produce reactive oxygen species in abundance.

  20. Miniaturized ultra-high performance liquid chromatography coupled to electrochemical detection: Investigation of system performance for neurochemical analysis.

    PubMed

    Van Schoors, Jolien; Maes, Katrien; Van Wanseele, Yannick; Broeckhoven, Ken; Van Eeckhaut, Ann

    2016-01-04

    Interest in implementing miniaturized ultra-high performance liquid chromatography (UHPLC) in neurochemical research is growing because of the need for faster, more selective and more sensitive neurotransmitter analyses. The instrument performance of a tailor-designed microbore UHPLC system coupled to electrochemical detection (ECD) is investigated, focusing on quantitative monoamine determination in in vivo microdialysis samples. The use of a microbore column (1.0 mm I.D.) requires miniaturization of the entire instrument, and a balance between extra-column band broadening and injection volume must be considered. This is accomplished through the user-defined Performance Optimizing Injection Sequence, whereby a 5 μL sample is injected on the column with a measured extra-column variance of 4.5-9.0 μL² and only 7 μL of sample uptake. Different sub-2 μm and superficially porous particle stationary phases are compared by means of the kinetic plot approach. Peak efficiencies of about 16000-35000 theoretical plates are obtained for the Acquity UPLC BEH C18 column within a 13 min analysis time. Furthermore, the coupling to ECD is shown to be suitable for microbore UHPLC analysis thanks to the miniaturized flow cell design, sufficiently fast data acquisition and mathematical data filtering. Ultimately, injection of in vivo samples demonstrates the applicability of the system for microdialysis analysis.
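
    The trade-off in this abstract, between extra-column variance and column efficiency, follows from the additivity of peak variances. A minimal sketch, assuming a hypothetical column peak variance of 50 μL² and a nominal 20000 plates (neither figure is from the paper; only the 4.5-9.0 μL² extra-column range is):

```python
# Sketch (assumed numbers): how extra-column variance erodes apparent plate count.
# Peak variances add, so N_obs = N_col * s2_col / (s2_col + s2_ec).
def apparent_plates(n_col, sigma2_col, sigma2_ec):
    """Observed plate count once extra-column variance (same units, e.g. uL^2) is added."""
    return n_col * sigma2_col / (sigma2_col + sigma2_ec)

# 4.5 and 9.0 uL^2 are the extra-column range reported above;
# 20000 plates and 50 uL^2 column variance are illustrative assumptions.
for sigma2_ec in (4.5, 9.0):
    print(round(apparent_plates(20000, 50.0, sigma2_ec)))
```

On these assumed numbers the observed efficiency drops by roughly 8-15%, which is why minimizing extra-column volume matters so much for a 1.0 mm I.D. column.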

  1. A film bulk acoustic resonator-based high-performance pressure sensor integrated with temperature control system

    NASA Astrophysics Data System (ADS)

    Zhang, Mengying; Zhao, Zhan; Du, Lidong; Fang, Zhen

    2017-04-01

    This paper presented a high-performance pressure sensor based on a film bulk acoustic resonator (FBAR). The support film of the FBAR chip was made of silicon nitride, and the part under the resonator area was etched to enhance the sensitivity and improve the linearity of the pressure sensor. A micro resistor temperature sensor and a micro resistor heater were integrated in the chip to monitor and control the operating temperature. The sensor chip was fabricated and packaged in an oscillator circuit for differential pressure detection. When the detected pressure ranged from -100 hPa to 600 hPa, the sensitivity of the improved FBAR pressure sensor was -0.967 kHz/hPa (-0.69 ppm/hPa), which was 19% higher than that of existing sensors with a complete support film. The nonlinearity of the improved sensor was less than ±0.35%, while that of the existing sensor was ±5%. To eliminate measurement errors from humidity, the temperature control system integrated in the sensor chip controlled the temperature of the resonator up to 75 °C, with an accuracy of ±0.015 °C and a power of 20 mW.
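
    The abstract quotes the same sensitivity in absolute (kHz/hPa) and relative (ppm/hPa) form, which pins down the resonant frequency. A quick consistency check, where the ~1.4 GHz resonant frequency is an assumption inferred from those two numbers, not a value stated in the record:

```python
# Sketch: relate the two reported sensitivities of the FBAR pressure sensor.
# f0 is an assumed resonant frequency (0.967 kHz/hPa and 0.69 ppm/hPa
# together imply f0 ~ 1.4 GHz); it is not given in the abstract.
f0_hz = 1.4e9                    # assumed FBAR resonant frequency
abs_sens_hz_per_hpa = -967.0     # reported: -0.967 kHz/hPa

# relative sensitivity in parts-per-million of f0 per hPa
rel_sens_ppm_per_hpa = abs_sens_hz_per_hpa / f0_hz * 1e6
print(round(rel_sens_ppm_per_hpa, 2))  # ≈ -0.69, matching the abstract
```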

  2. Determination of Oxyclozanide in Beef and Milk using High-Performance Liquid Chromatography System with UV Detector.

    PubMed

    Jo, Kyul; Cho, Hee-Jung; Yi, Hee; Cho, Soo-Min; Park, Jin-A; Kwon, Chan-Hyeok; Park, Hee-Ra; Kwon, Ki-Sung; Shin, Ho-Chul

    2011-03-01

    A method was developed and validated for the determination of oxyclozanide residue concentrations in beef and commercial milk using a high-performance liquid chromatography system. Oxyclozanide was successfully separated on a reverse phase column (Xbridge-C18, 4.6 × 250 mm, 5 µm) with a mobile phase composed of acetonitrile and 0.1% phosphoric acid (60:40, v/v). The analytical procedure involved deproteinization using acetonitrile for beef and 2% formic acid in acetonitrile for commercial milk, dehydration by adding sodium sulfate to the liquid analytical sample, and defatting with n-hexane; after these steps, the extract was evaporated to dryness under a stream of nitrogen. The final extracted sample was dissolved in the mobile phase and filtered through a 0.45 µm syringe filter. The method had good selectivity and recovery (70.70±7.90-110.79±14.95%) from both matrices. The LOQs ranged from 9.7 to 9.8 µg/kg for beef and commercial milk, and the recoveries met the standards set by the CODEX guideline.

  3. Joining of ceramics for high performance energy systems. Mid-term progress report, August 1, 1979-March 31, 1980

    SciTech Connect

    Smeltzer, C E; Metcalfe, A G

    1980-10-06

    The subject program is primarily an exploratory and demonstration study of the use of silicate glass-based adhesives for bonding silicon-base refractory ceramics (SiC, Si3N4). The projected application is 1250 to 2050 °F relaxing joint service in high-performance energy conversion systems. The five program tasks and their current status are as follows. Task 1 - Long-Term Joint Stability: time-temperature-transformation studies of candidate glass adhesives, out to 2000 hours of simulated service exposure, are half complete. Task 2 - Environmental and Service Effects on Joint Reliability: start-up delayed due to late delivery of candidate glass fillers and ceramic specimens. Task 3 - Viscoelastic Damping of Glass Bonded Ceramics: promising results obtained over approximately the same range of glass viscosity required for the joint relaxation function (10^7.5 to 10^9.5 poise); work is 90% complete. Task 4 - Crack Arrest and Crack Diversion by Joints: no work started due to late arrival of materials. Task 5 - Improved Joining and Fabrication Methods: significant work has been conducted in the area of refractory pre-glazing and the application and bonding of high-density candidate glass fillers (by both hand-artisan and slip-spray techniques); work is half complete.
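
    The viscosity window quoted for the joint relaxation function is given in poise (CGS). A one-line conversion to SI units, using only the abstract's 10^7.5-10^9.5 poise range and the standard factor 1 poise = 0.1 Pa·s:

```python
# Sketch: convert the glass working-viscosity window from poise to Pa*s.
# 1 poise = 0.1 Pa*s; the exponents 7.5 and 9.5 come from the abstract.
low_pas = 10**7.5 * 0.1    # lower bound, ~3.2e6 Pa*s
high_pas = 10**9.5 * 0.1   # upper bound, ~3.2e8 Pa*s
print(f"{low_pas:.2e} to {high_pas:.2e} Pa*s")
```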

  4. How high-performance work systems drive health care value: an examination of leading process improvement strategies.

    PubMed

    Robbins, Julie; Garman, Andrew N; Song, Paula H; McAlearney, Ann Scheck

    2012-01-01

    As hospitals focus on increasing health care value, process improvement strategies have proliferated, seemingly faster than the evidence base supporting them. Yet, most process improvement strategies are associated with work practices for which solid evidence does exist. Evaluating improvement strategies in the context of evidence-based work practices can provide guidance about which strategies would work best for a given health care organization. We combined a literature review with analysis of key informant interview data collected from 5 case studies of high-performance work practices (HPWPs) in health care organizations. We explored the link between an evidence-based framework for HPWP use and 3 process improvement strategies: Hardwiring Excellence, Lean/Six Sigma, and Baldrige. We found that each of these process improvement strategies has not only strengths but also important gaps with respect to incorporating HPWPs involving engaging staff, aligning leaders, acquiring and developing talent, and empowering the front line. Given differences among these strategies, our analyses suggest that some may work better than others for individual health care organizations, depending on the organizations' current management systems. In practice, most organizations implementing improvement strategies would benefit from including evidence-based HPWPs to maximize the potential for process improvement strategies to increase value in health care.

  5. [Determination of 61 central nervous system drugs in plasma by protein precipitation-high performance liquid chromatography].

    PubMed

    Zhang, Yin; Chen, Chonghong; Lin, Ling; Chen, Yinong

    2009-11-01

    A method was established for the determination of 61 central nervous system drugs in plasma using protein precipitation combined with high performance liquid chromatography-diode array detection (HPLC-DAD). A volume of 1.5 mL of acetonitrile was added to 1 mL of plasma; after vortexing, centrifugation and filtration, the supernatant was directly injected into the HPLC system. The separation was performed on an Agilent TC-C18 column (250 mm × 4.6 mm, 5 µm) with acetonitrile and phosphate buffer solution as the mobile phase, using gradient elution at a flow rate of 1.5 mL/min. The detection wavelength was 210 nm; full spectra were recorded from 200-364 nm. The recoveries of the 61 drugs were greater than 80%, with relative standard deviations (RSDs) ranging from 0.94% to 11.23%. The protein precipitation method is simple, rapid and low-cost, with good recovery and reproducibility, and is suitable as a general pretreatment for systematic toxicological analysis (STA) of the 61 drugs.

  6. Synthesis and Characterization of High Performance Polyimides Containing the Bicyclo(2.2.2)oct-7-ene Ring System

    NASA Technical Reports Server (NTRS)

    Alvarado, M.; Harruna, I. I.; Bota, K. B.

    1997-01-01

    Because of the difficulty of processing polyimides with high temperature stability and good solvent resistance, we have synthesized high performance polyimides containing the bicyclo(2.2.2)oct-7-ene ring system, which can easily be fabricated into films and fibers and subsequently converted to the more stable aromatic polyimides. To improve processability, we prepared two polyimides by reacting 1,4-phenylenediamine and 1,3-phenylenediamine with bicyclo(2.2.2)-7-octene-2,3,5,6-tetracarboxylic dianhydride. The polyimides were characterized by FTIR, FTNMR, solubility and thermal analysis. Thermogravimetric analysis (TGA) showed that the 1,4-phenylenediamine- and 1,3-phenylenediamine-containing polyimides were stable up to 460 and 379 °C, respectively, under a nitrogen atmosphere. No melting transitions were observed for either polyimide. The 1,4-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide and methane sulfonic acid, and soluble in sulfuric acid at room temperature. The 1,3-phenylenediamine-containing polyimide is partially soluble in dimethyl sulfoxide, tetramethyl urea and N,N-dimethyl acetamide, and soluble in methane sulfonic acid and sulfuric acid.

  7. Determination of Oxyclozanide in Beef and Milk using High-Performance Liquid Chromatography System with UV Detector

    PubMed Central

    Jo, Kyul; Cho, Hee-Jung; Yi, Hee; Cho, Soo-Min; Park, Jin-A; Kwon, Chan-Hyeok; Park, Hee-Ra; Kwon, Ki-Sung

    2011-01-01

    A method was developed and validated for the determination of oxyclozanide residue concentrations in beef and commercial milk using a high-performance liquid chromatography system. Oxyclozanide was successfully separated on a reverse phase column (Xbridge-C18, 4.6 × 250 mm, 5 µm) with a mobile phase composed of acetonitrile and 0.1% phosphoric acid (60:40, v/v). The analytical procedure involved deproteinization using acetonitrile for beef and 2% formic acid in acetonitrile for commercial milk, dehydration by adding sodium sulfate to the liquid analytical sample, and defatting with n-hexane; after these steps, the extract was evaporated to dryness under a stream of nitrogen. The final extracted sample was dissolved in the mobile phase and filtered through a 0.45 µm syringe filter. The method had good selectivity and recovery (70.70±7.90-110.79±14.95%) from both matrices. The LOQs ranged from 9.7 to 9.8 µg/kg for beef and commercial milk, and the recoveries met the standards set by the CODEX guideline. PMID:21826158

  9. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream processing system which utilizes server-based NVIDIA Graphical Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization; this allows NEIS to minimize the bandwidth and latency for data delivery to end users. Client-side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new

  10. High-Performance SiC/SiC Ceramic Composite Systems Developed for 1315 C (2400 F) Engine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann; Morscher, Gregory N.; Bhatt, Ramakrishna T.

    2004-01-01

    As structural materials for hot-section components in advanced aerospace and land-based gas turbine engines, silicon carbide (SiC) ceramic matrix composites reinforced by high-performance SiC fibers offer a variety of performance advantages over current bill-of-materials, such as nickel-based superalloys. These advantages are based on the SiC/SiC composites displaying higher temperature capability for a given structural load, lower density (approximately 30 to 50 percent of metal density), and lower thermal expansion. These properties should, in turn, result in many important engine benefits, such as reduced component cooling air requirements, simpler component design, reduced support structure weight, improved fuel efficiency, reduced emissions, higher blade frequencies, reduced blade clearances, and higher thrust. Under the NASA Ultra-Efficient Engine Technology (UEET) Project, much progress has been made at the NASA Glenn Research Center in identifying and optimizing two high-performance SiC/SiC composite systems. The table compares typical properties of oxide/oxide panels and SiC/SiC panels formed by the random stacking of balanced 0°/90° fabric pieces reinforced by the indicated fiber types. The Glenn SiC/SiC systems A and B (shaded area of the table) were reinforced by the Sylramic-iBN SiC fiber, which was produced at Glenn by thermal treatment of the commercial Sylramic SiC fiber (Dow Corning, Midland, MI; ref. 2). The treatment process (1) removes boron from the Sylramic fiber, thereby improving fiber creep, rupture, and oxidation resistance, and (2) allows the boron to react with nitrogen to form a thin in situ grown BN coating on the fiber surface, thereby providing an oxidation-resistant buffer layer between contacting fibers in the fabric and in the final composite. The fabric stacks for all SiC/SiC panels were provided to GE Power Systems Composites for chemical vapor infiltration of Glenn-designed BN fiber coatings and conventional SiC matrices.

  11. High-performance work systems in health care management, part 2: qualitative evidence from five case studies.

    PubMed

    McAlearney, Ann Scheck; Garman, Andrew N; Song, Paula H; McHugh, Megan; Robbins, Julie; Harrison, Michael I

    2011-01-01

    A capable workforce is central to the delivery of high-quality care. Research from other industries suggests that the methodical use of evidence-based management practices (also known as high-performance work practices [HPWPs]), such as systematic personnel selection and incentive compensation, serves to attract and retain well-qualified health care staff, and that HPWPs may represent an important and underutilized strategy for improving quality of care and patient safety. The aims of this study were to improve our understanding of the use of HPWPs in health care organizations and to learn about their contribution to quality of care and patient safety improvements. Guided by a model of HPWPs developed through an extensive literature review and synthesis, we conducted a series of interviews with key informants from five U.S. health care organizations that had been identified based on their exemplary use of HPWPs. We sought to explore the applicability of our model and learn whether and how HPWPs were related to quality and safety. All interviews were recorded, transcribed, and subjected to qualitative analysis. In each of the five organizations, we found emphasis on all four HPWP subsystems in our conceptual model: engagement, staff acquisition/development, frontline empowerment, and leadership alignment/development. Although some HPWPs were common, there were also practices that were distinctive to a single organization. Our informants reported links between HPWPs and employee outcomes (e.g., turnover and higher satisfaction/engagement) and indicated that HPWPs made important contributions to system- and organization-level outcomes (e.g., improved recruitment, improved ability to address safety concerns, and lower turnover). These case studies suggest that the systematic use of HPWPs may improve performance in health care organizations and provide examples of how HPWPs can impact quality and safety in health care. Further research is needed to specify

  12. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer size. Numerical investigation of system performance when the optical switch port count is scaled to 32×32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port optical switches is also presented.
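
    The Petabit/s figure in this abstract is straightforward capacity arithmetic. A back-of-the-envelope sketch, where the ToR count of 1024 is an illustrative round number for the abstract's "more than 1000 ToRs", not a value from the paper:

```python
# Sketch (assumed ToR count): aggregate capacity of the scaled optical fabric.
tors = 1024                    # illustrative; abstract says "more than 1000"
tor_interface_bps = 1e12       # 1 Terabit/s interface per ToR, as stated
aggregate_bps = tors * tor_interface_bps

print(aggregate_bps / 1e15)    # aggregate capacity in Petabit/s
```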

  13. Methylmercury determination using a hyphenated high performance liquid chromatography ultraviolet cold vapor multipath atomic absorption spectrometry system

    NASA Astrophysics Data System (ADS)

    Campos, Reinaldo C.; Gonçalves, Rodrigo A.; Brandão, Geisamanda P.; Azevedo, Marlo S.; Oliveira, Fabiana; Wasserman, Julio

    2009-06-01

    The present work investigates the use of a multipath-cell atomic absorption mercury detector for mercury speciation analysis in a hyphenated high performance liquid chromatography assembly. The multipath absorption cell multiplies the optical path, while energy losses are compensated by a very intense primary source. Zeeman-effect background correction compensates for non-specific absorption. For the separation step, the mobile phase consisted of a 0.010% m/v mercaptoethanol solution in 5% methanol (pH = 5), a C18 column was used as the stationary phase, and post-column treatment was performed by UV irradiation (60 °C, 13 W). The eluate was then merged with 3 mol L-1 HCl, reduction was performed by a NaBH4 solution, and the Hg vapor formed was separated in the gas-liquid separator and carried through a desiccant membrane to the detector. The detector was easily attached to the system, since an external gas flow to the gas-liquid separator was provided. A multivariate approach was used to optimize the procedure, and peak area was used for measurement. Instrumental limits of detection of 0.05 µg L-1 were obtained for ionic mercury (Hg2+) and methylmercury (CH3Hg+), for an injection volume of 200 µL. The multipath atomic absorption spectrometer proved to be a competitive mercury detector for hyphenated systems relative to the more commonly used atomic fluorescence and inductively coupled plasma mass spectrometric detectors. Preliminary application studies were performed for the determination of methylmercury in sediments.
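Instrumental limits of detection like the one quoted above are conventionally derived from a 3-sigma (S/N = 3) criterion on replicate blank measurements. A minimal sketch of that calculation, using hypothetical blank signals and a hypothetical calibration slope rather than the paper's data:

```python
# Illustrative LOD calculation (3-sigma criterion) from a linear calibration.
# Blank signals and slope are synthetic; the paper reports an LOD of 0.05 ug/L.
import statistics

def lod_3sigma(blank_signals, slope):
    """LOD = 3 * (standard deviation of blank signals) / calibration slope."""
    return 3 * statistics.stdev(blank_signals) / slope

blanks = [0.8, 1.1, 0.9, 1.0, 1.2]  # hypothetical blank peak areas
slope = 95.0                         # hypothetical peak area per (ug/L)
print(f"LOD = {lod_3sigma(blanks, slope):.3f} ug/L")
```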

  14. TINA, a new fully automated high-performance droplet freezing assay coupled to a customized infrared detection system

    NASA Astrophysics Data System (ADS)

    Kunert, Anna Theresa; Lamneck, Mark; Gurk, Christian; Helleis, Frank; Klimach, Thomas; Scheel, Jan Frederik; Pöschl, Ulrich; Fröhlich-Nowoisky, Janine

    2017-04-01

    Heterogeneous ice nucleation is frequently investigated by simultaneously cooling a defined number of droplets of equal volume in droplet freezing assays. In 1971, Gabor Vali established the quantitative assessment of ice nuclei active at specific temperatures for many droplet freezing assays. Since then, several instruments have been developed, and various modifications and improvements have been made. However, for quantitative analysis of ice nuclei, current droplet freezing assays are still limited by small droplet numbers, large droplet volumes, inadequate separation of individual droplets (which can result in mutual interference), or imprecise temperature control within the system. Here, we present the Twin Ice Nucleation Assay (TINA), which improves on existing droplet freezing assays in terms of temperature range and statistics. In particular, we developed a distinct detection system for freezing events in which the temperature gradient of each individual droplet is tracked by infrared cameras coupled to custom software. In the fully automated setup, ice nucleation can be studied in two independently cooled, customized aluminum blocks run by a high-performance thermostat. We developed a cooling setup that allows both large and small temperature changes within a very short period of time, combined with optimal insulation. Hence, measurements can be performed at temperatures down to -55 °C (218 K) and at cooling rates up to 3 K min-1. In addition, TINA provides the analysis of nearly 1000 droplets per run, with droplet volumes between 1 µL and 50 µL. This enables fast and more precise analysis of biological samples with complex IN composition, as well as better statistics for every sample at the same time.
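The quantitative assessment established by Vali (1971) converts the fraction of droplets still unfrozen at each temperature into a cumulative ice-nucleus concentration, K(T) = -ln(N_unfrozen/N_total) / V_droplet. A minimal sketch with hypothetical droplet counts, not TINA data:

```python
import math

# Cumulative ice-nucleus (IN) concentration per Vali (1971):
#   K(T) = -ln(N_unfrozen(T) / N_total) / V_droplet
# The droplet counts below are hypothetical examples.

def vali_K(n_total: int, n_unfrozen: int, v_droplet_L: float) -> float:
    """Cumulative IN concentration (per litre of sample) at temperature T."""
    return -math.log(n_unfrozen / n_total) / v_droplet_L

# e.g. 960 droplets of 1 uL (1e-6 L) each, 240 still liquid at some T
k = vali_K(n_total=960, n_unfrozen=240, v_droplet_L=1e-6)
print(f"K(T) = {k:.3g} IN per litre")
```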

  15. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap(HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10(exp 12) charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10(exp 2) MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and potentials up to 20 kV. To date, a series of normal-matter experiments using positive and negative ions have been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal-matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a non-destructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by using the actual annihilation products as a unique indicator. The immediate yield of the annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence units, and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic-ray events from the multi-particle annihilation shower. This system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling mode that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods; this provides an independent method of validating particle lifetimes. The second is a high-speed sampling mode accumulating information on a microsecond time scale, useful when trapped antiparticles are extracted
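The voting-coincidence idea above can be sketched as simple M-of-N logic over the scintillator channels; the channel count and threshold below are hypothetical, not the HiPAT hardware configuration:

```python
# Sketch of M-of-N voting-coincidence logic used to separate multi-particle
# annihilation showers from single cosmic-ray hits. Channel layout and
# threshold are illustrative assumptions.

def coincidence(hits: list[bool], threshold: int) -> bool:
    """Accept an event if at least `threshold` scintillator channels fire
    within the same coincidence window."""
    return sum(hits) >= threshold

# A single cosmic ray typically fires one or two channels:
print(coincidence([True, False, False, False], threshold=3))  # rejected
# An annihilation shower of several pions hits multiple paddles:
print(coincidence([True, True, True, False], threshold=3))    # accepted
```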

  17. Potential application of ultra-high performance fiber-reinforced concrete with wet-mix shotcrete system in tunneling

    NASA Astrophysics Data System (ADS)

    Goblet, Valentine Pascale

    In the tunneling industry, shotcrete has been used for several decades. Wet-mix spray-on methods allow application in complex underground profiles and shapes. The need for time-efficient spraying methods and constructability for lining coverage opens the door for technologies like steel- and synthetic-fiber-reinforced shotcrete to achieve a uniform, good-quality product. An important advantage of applying fiber-reinforced concrete in shotcrete systems for tunneling is that almost no steel fixing is required. This leads to several other advantages, including safer working conditions during excavation, lower cost, and higher quality. However, there are still some limitations. This research presents an analysis and evaluation of the potential application of a new R&D product, ultra-high-performance fiber-reinforced concrete (UHP-FRC), developed by UTA associate professor Shih-Ho (Simon) Chao, focusing on its application to tunnel lining using a wet-mix shotcrete system. The objective of this study is to evaluate the potential application of UHP-FRC with wet-mix shotcrete equipment. Because this is the first time UHP-FRC has been used for this purpose, this thesis also presents a preliminary evaluation of the compressive and tensile strength of UHP-FRC after application with shotcrete equipment and identifies proper shotcrete procedures for mixing and application of UHP-FRC. A test sample was created with the wet-mix shotcrete system for further compressive and tensile strength analysis, and a plan was proposed for the best way to use UHP-FRC in lining systems for the tunneling industry. As a result of this study, the viscosity required for pumpability was achieved for UHP-FRC; however, the mixer was not fast enough to mix this material efficiently. After 2 days, the material strength reached 7,200 psi; however, vertical shotcrete application was not achieved

  18. Inverse opal-inspired, nanoscaffold battery separators: a new membrane opportunity for high-performance energy storage systems.

    PubMed

    Kim, Jung-Hwan; Kim, Jeong-Hoon; Choi, Keun-Ho; Yu, Hyung Kyun; Kim, Jong Hun; Lee, Joo Sung; Lee, Sang-Young

    2014-08-13

    The facilitation of ion/electron transport, along with the ever-increasing demand for high energy density, is key to boosting the development of energy storage systems such as lithium-ion batteries. Among major battery components, separator membranes have not been the center of attention compared to other, electrochemically active materials, despite their important roles in allowing ionic flow and preventing electrical contact between electrodes. Here, we present a new class of battery separator based on an inverse opal-inspired, seamless nanoscaffold structure (the "IO separator"), as an unprecedented membrane opportunity to enable remarkable advances in cell performance far beyond those accessible with conventional battery separators. The IO separator is easily fabricated through one-pot, evaporation-induced self-assembly of colloidal silica nanoparticles in the presence of ultraviolet (UV)-curable triacrylate monomer inside a nonwoven substrate, followed by UV cross-linking and selective removal of the silica nanoparticle superlattices. The precisely ordered, well-reticulated nanoporous structure of the IO separator allows significant improvement in ion transfer toward electrodes. The IO separator-driven facilitation of ion transport is expected to play a critical role in the realization of high-performance batteries, in particular under harsh conditions such as high-mass-loading electrodes, fast charging/discharging, and highly polar liquid electrolytes. Moreover, the IO separator moves the Ragone plot curves to a more desirable position representing high-energy/high-power density, without tailoring other battery materials and configurations. This study provides a new perspective on battery separators: a paradigm shift from plain porous films to pseudoelectrochemically active nanomembranes that can influence the charge/discharge reaction.

  19. High-performance two-axis gimbal system for free space laser communications onboard unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Locke, Michael; Czarnomski, Mariusz; Qadir, Ashraf; Setness, Brock; Baer, Nicolai; Meyer, Jennifer; Semke, William H.

    2011-03-01

    A custom-designed and manufactured gimbal with a wide field of view and fast response time is developed. This enhanced custom design is a 24-volt system with integrated motor controllers and drivers that offers a full 180° field-of-view in both azimuth and elevation; this provides more continuous tracking capability as well as increased velocities of up to 479° per second. The addition of active high-frequency vibration control, to complement the passive vibration isolation system, is also in development. The ultimate goal of this research is to achieve affordable, reliable, and secure air-to-air laser communications between two separate remotely piloted aircraft. As a proof of concept, the practical implementation of an air-to-ground laser-based video communications payload system flown by a small Unmanned Aerial Vehicle (UAV) will be demonstrated. A numerical tracking algorithm has been written, tested, and used to aim the airborne laser transmitter at a stationary ground-based receiver with known GPS coordinates; however, further refinement of the tracking capabilities depends on an improved gimbal design for precision pointing of the airborne laser transmitter. The current gimbal pointing system is a two-axis, commercial-off-the-shelf component, limited in both range and velocity: it is capable of 360° of pan and 78° of tilt at a velocity of 60° per second. The control algorithm used for aiming the gimbal is executed on a PC-104 format embedded computer onboard the payload to accurately track a stationary ground-based receiver. This algorithm autonomously calculates a line-of-sight vector in real time using the UAV autopilot's Differential Global Positioning System (DGPS), which provides latitude, longitude, and altitude, and Inertial Measurement Unit (IMU), which provides roll, pitch, and yaw data, along with the known Global Positioning System (GPS) location of the ground-based photodiode array receiver.
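A line-of-sight calculation of the kind described above can be sketched with a local flat-earth (ENU) approximation; the coordinates, axis conventions, and simplifications below are assumptions for illustration, not the payload's actual algorithm:

```python
import math

# Simplified line-of-sight pointing: azimuth/elevation from a UAV GPS fix to
# a ground receiver with known coordinates, using a local flat-earth (ENU)
# approximation. Coordinates below are hypothetical.

R_EARTH = 6_371_000.0  # mean Earth radius, m

def los_az_el(lat_uav, lon_uav, alt_uav, lat_gnd, lon_gnd, alt_gnd):
    """Return (azimuth_deg clockwise from north, elevation_deg) to target."""
    dlat = math.radians(lat_gnd - lat_uav)
    dlon = math.radians(lon_gnd - lon_uav)
    north = dlat * R_EARTH
    east = dlon * R_EARTH * math.cos(math.radians(lat_uav))
    down = alt_uav - alt_gnd                     # UAV above ground: down > 0
    az = math.degrees(math.atan2(east, north)) % 360.0
    el = math.degrees(math.atan2(-down, math.hypot(north, east)))
    return az, el

# UAV 250 m above a receiver sitting 0.005 deg of latitude to the south:
az, el = los_az_el(47.925, -97.032, 500.0, 47.920, -97.032, 250.0)
print(f"az {az:.1f} deg, el {el:.1f} deg")
```

The negative elevation indicates the gimbal must point below the horizon, as expected for an air-to-ground link.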

  20. Engineering development of coal-fired high-performance power systems. Technical progress report 1, July through September 1995

    SciTech Connect

    1995-12-01

    In Phase 1 of the project, a conceptual design of a coal-fired high performance power system was developed, and small-scale R&D was done in critical areas of the design. The current phase of the project includes development through the pilot plant stage and design of a prototype plant that would be built in Phase 3. Goals have been identified that relate to the efficiency, emissions, costs, and general operation of the system. The base case arrangement of the HIPPS cycle is shown in Figure 1. It is a combined cycle plant. This arrangement is referred to as the All Coal HIPPS because it does not require any other fuels for normal operation. A fluidized bed, air-blown pyrolyzer converts coal into fuel gas and char. The char is fired in a high temperature advanced furnace (HITAF), which heats both air for a gas turbine and steam for a steam turbine. The air is heated up to 1400°F in the HITAF, and the tube banks for heating air are constructed of alloy tubes. The fuel gas from the pyrolyzer goes to a topping combustor, where it is used to raise the air entering the gas turbine to 2350°F. Additional steam duty is achieved with a heat recovery steam generator in the gas turbine exhaust stream and economizers in the HITAF flue gas exhaust stream. An alternative HIPPS cycle is shown in Figure 2. This arrangement uses a ceramic air heater to heat the air to temperatures above what can be achieved with alloy tubes, and is referred to as the 35% natural gas HIPPS. A pyrolyzer is used as in the base case HIPPS, but the fuel gas generated is fired upstream of the ceramic air heater instead of in the topping combustor. Gas turbine air is heated to 1400°F in alloy tubes, the same as in the All Coal HIPPS. This air then goes to the ceramic air heater, where it is heated further before going to the topping combustor. The temperature of the air leaving the ceramic air heater will depend on technological developments in that component.

  1. Development of a High-Performance Dual-Energy Chest Imaging System: Initial Investigation of Diagnostic Performance

    PubMed Central

    Kashani, H.; Gang, G.J.; Shkumat, N. A.; Varon, C. A.; Yorkston, J.; Van Metter, R.; Paul, N. S.; Siewerdsen, J. H.

    2009-01-01

    Rationale and Objectives To assess the performance of a newly developed dual-energy (DE) chest radiography system in comparison to digital radiography (DR) in the detection and characterization of lung nodules. Materials and Methods An experimental prototype has been developed for high-performance DE chest imaging with total dose equivalent to a single posterior-anterior DR image. Low- and high-kVp projections were used to decompose DE soft-tissue and bone images. A cohort of 55 patients (31 male, 24 female, mean age 65.6 years) was drawn from an ongoing trial involving patients referred for percutaneous CT-guided biopsy of suspicious lung nodules. DE and DR images were acquired of each patient prior to biopsy. Image quality was assessed by means of human observer tests involving 5 radiologists independently rating the detection and characterization of lung nodules on a 9-point scale. Results were analyzed in terms of the fraction of cases at or above a given rating, and statistical significance was evaluated with a Wilcoxon signed rank test. Performance was analyzed for all cases pooled as well as by stratification of nodule size, density, lung region, and chest thickness. Results The studies demonstrate a significant performance advantage for DE imaging compared to DR (p<0.001) in the detection and characterization of lung nodules. DE imaging improved the detection of both small and large nodules and exhibited the most significant improvement in regions of the upper lobes, where overlying anatomical noise (ribs and clavicles) is believed to reduce nodule conspicuity in DR. Conclusions DE imaging outperformed DR overall, particularly in the detection of small, solid nodules. DE imaging also performed better in regions dominated by anatomical noise, such as the lung apices. The potential for improved nodule detection and characterization at radiation doses equivalent to DR is encouraging and could augment broader utilization of DE imaging.
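The paired rating comparison described above relies on the Wilcoxon signed-rank test. A self-contained sketch of an exact small-sample version, applied to synthetic (not the study's) paired observer ratings:

```python
from itertools import product

def wilcoxon_signed_rank(x, y):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Zero differences are dropped; tied |differences| get average ranks.
    Suitable only for small n (enumerates all 2**n sign patterns)."""
    d = [a - b for a, b in zip(x, y) if a != b]
    # Assign average ranks to the sorted absolute differences.
    absd = sorted((abs(v), i) for i, v in enumerate(d))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(absd):
        j = i
        while j < len(absd) and absd[j][0] == absd[i][0]:
            j += 1
        avg = (i + 1 + j) / 2          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[absd[k][1]] = avg
        i = j
    total_rank = sum(ranks)
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    obs = min(w_plus, total_rank - w_plus)
    # Exact null distribution: every sign pattern is equally likely.
    extreme = 0
    for signs in product([0, 1], repeat=len(d)):
        w = sum(r for r, s in zip(ranks, signs) if s)
        if min(w, total_rank - w) <= obs:
            extreme += 1
    return w_plus, extreme / 2 ** len(d)

dr = [5, 4, 6, 5, 3, 4, 5, 6, 4, 5]  # hypothetical 9-point DR scores
de = [7, 6, 7, 6, 5, 6, 7, 8, 6, 7]  # hypothetical paired DE scores
w, p = wilcoxon_signed_rank(de, dr)
print(f"W+ = {w}, exact two-sided p = {p:.4f}")
```

With every paired difference favoring DE, the exact two-sided p-value is 2/2^10, well below 0.05.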

  2. Engineering development of coal-fired high performance power systems, Phase II and Phase III. Quarter progress report, April 1, 1996--June 30, 1996

    SciTech Connect

    1996-11-01

    Work is presented on the development of a coal-fired high performance power generation system by the year 2000. This report describes the design of the air heater, duct heater, system controls, slag viscosity, and design of a quench zone.

  3. Direct determination of benzalkonium chloride in ophthalmic systems by reversed-phase high-performance liquid chromatography.

    PubMed

    Ambrus, G; Takahashi, L T; Marty, P A

    1987-02-01

    High-performance liquid chromatography has been used to quantitate benzalkonium chloride (alkylbenzyldimethylammonium chloride) in complex ophthalmic formulations at or below concentration levels of 50 ppm. The method involves a one-step dilution for sample preparation and direct injection; therefore, recovery and/or conversion problems are nonexistent. The assay is quick, specific, reproducible, and simple. This new approach makes routine determinations far simpler than previous methods and is especially useful for product stability studies and quality control procedures.

  4. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures; it is used here to describe polymers that perform at temperatures of 177 °C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic-terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  5. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.
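Thermal stress design of a window typically starts from the constrained-plate estimate sigma = E * alpha * dT / (1 - nu). A minimal sketch using typical handbook values for sapphire; the numbers are illustrative assumptions, not the project's design data:

```python
# First-order biaxial thermal stress in a fully constrained window face:
#   sigma = E * alpha * dT / (1 - nu)
# Property values are typical handbook figures for sapphire, used only
# for illustration.

def thermal_stress_MPa(E_GPa, alpha_per_K, dT_K, nu):
    """Biaxial thermal stress in a constrained plate, in MPa."""
    return E_GPa * 1e3 * alpha_per_K * dT_K / (1 - nu)

sigma = thermal_stress_MPa(E_GPa=400.0, alpha_per_K=5.5e-6, dT_K=300.0, nu=0.28)
print(f"thermal stress = {sigma:.0f} MPa")
```

Even a few hundred kelvin of constrained temperature rise produces stresses near sapphire's practical strength, which is why the surface polish and mounting arrangement matter so much.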

  6. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  7. CLUPI, a high-performance imaging system on the rover of the 2018 mission to discover biofabrics on Mars

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; Coradini, A.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.

    2011-10-01

    The scientific objectives of the 2018 ExoMars rover mission are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, textures, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ExoMars rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized (<700 g), low-power, robust imaging system able to operate at very low temperatures (-120 °C). CLUPI has a working distance from 10 cm to infinity, providing outstanding pictures with a 2652x1768 colour detector. At 10 cm, the resolution is 7 micrometers/pixel in colour. The optical-mechanical interface is a smart assembly in titanium that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, Mars Express and Smart-1 missions. In a typical field scenario, a geologist uses his or her eyes to make an overview of an area and the outcrops within it to determine sites of particular interest for more detailed study. In the ExoMars scenario, the PanCam wide-angle cameras (WACs) will be used for this task. After making a preliminary general evaluation, the geologist approaches a particular outcrop for closer observation of structures at the decimetre to sub-decimetre scale (ExoMars' High Resolution Camera) before finally getting very close to the surface with a hand lens (ExoMars' CLUPI), and/or taking a hand specimen, for detailed observation of textures and minerals. Using structural, textural and preliminary compositional analysis, the geologist identifies the materials and makes a decision as to whether they are of

  8. Use of high-performance computers, FEA and the CAVE automatic virtual environment for collaborative design of complex systems

    SciTech Connect

    Plaskacz, E.J.; Kulak, R.F.

    1996-03-01

    Concurrent, interactive engineering design and analysis has the potential for substantially reducing product development time and enhancing US competitiveness. Traditionally, engineering design has involved running engineering analysis codes to simulate and evaluate the response of a product or process, writing the output data to file, and viewing or "post-processing" the results at a later time. The emergence of high-performance computer architectures, virtual reality, and advanced telecommunications in the mid-1990s promises to dramatically alter the way designers, manufacturers, engineers and scientists do their work.

  9. The choice of the principle of functioning of the system of magnetic levitation for the device of high-performance testing of powder permanent magnets

    NASA Astrophysics Data System (ADS)

    Shaykhutdinov, D. V.; Gorbatenko, N. I.; Narakidze, N. D.; Vlasov, A. S.; Stetsenko, I. A.

    2017-02-01

    The present article focuses on quality control problems for permanent magnets. High-performance direct-flow systems for mechanical engineering production processes are considered. The main drawback of existing high-performance direct-flow systems is the final phase of movement of the tested product, where the motion is oscillatory and abrupt braking may damage highly fragile samples. A special system for permanent magnet testing is proposed. The system magnetically levitates the test sample. Active correction of the electric current in the magnetizing coils is proposed as the basic operating principle of this system. The system provides the required parameters of motion of the test sample by using an opposing connection of the magnetizing coils. This new technique provides aperiodic motion and limited acceleration while preserving high accuracy and the required time for placement in the measuring position.
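The active current-correction principle can be sketched as a feedforward-plus-PD loop on a one-dimensional toy levitation model. The plant parameters and gains below are illustrative assumptions (not the device in the article), chosen overdamped so that the approach to the measuring position is aperiodic, as required:

```python
# Toy 1-D magnetic levitation with active current correction:
# force = k_coil * current - m * g, with a feedforward term holding the
# sample against gravity and a PD term driving the position error to zero.
# All parameters are hypothetical; gains are chosen well above critical
# damping so the motion is aperiodic (no oscillation).

def simulate_levitation(kp=40.0, kd=12.0, dt=1e-3, steps=3000):
    """Return (position offset, velocity) after `steps` of semi-implicit Euler."""
    m, g, k_coil = 0.05, 9.81, 2.0   # kg, m/s^2, N/A  (illustrative)
    x, v = 0.010, 0.0                # start 10 mm from the measuring position
    for _ in range(steps):
        err = -x                                      # setpoint at x = 0
        current = (m * g) / k_coil + kp * err - kd * v  # feedforward + PD
        a = (k_coil * current - m * g) / m
        v += a * dt
        x += v * dt
    return x, v

x_final, v_final = simulate_levitation()
print(f"final offset {x_final*1e3:.4f} mm, velocity {v_final*1e3:.4f} mm/s")
```

With these gains the damping term dominates, so the sample settles monotonically instead of overshooting and braking abruptly.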

  10. Fabrication of a high performance acoustic emission (AE) sensor to monitor and diagnose disturbances in HTS tapes and magnet systems

    NASA Astrophysics Data System (ADS)

    Kim, Ju-Hyung; Song, Jung-Bin; Jeong, Young Hun; Lee, Young-Jin; Paik, Jong-Hoo; Kim, Woo-Seok; Lee, Haigun

    2010-02-01

    An acoustic emission (AE) technique was introduced as a non-destructive method to monitor sudden deformation caused by local heat concentrations and micro-cracks within superconductors and superconducting magnets. However, the detection of AE signals in a high temperature superconductor (HTS) tape is not easy because of the low signal-to-noise ratio caused by noise from boiling liquid cryogen or mechanical vibration from the cryocooler. Therefore, high performance piezoelectric ceramics are needed to improve the sensitivity of the AE sensor. The aim of this study was to improve the piezoelectric and dielectric properties to enhance the performance of an AE sensor. This study examined the effects of Nb2O5 addition (0.0 wt.% to 2.0 wt.%) on the properties of high performance piezoelectric ceramics, Pb(Zr0.54Ti0.46)O3 + 0.2 wt.% Cr2O3, sintered at 1200 °C for 2 h. The performance was examined with respect to the acoustic emission response of AE sensors manufactured using specimens with various Nb2O5 contents. Superior sensor performance was obtained for the AE sensors fabricated with specimens containing 1.0 wt.% to 1.5 wt.% Nb2O5. The performance and characteristics of the AE sensors were in accordance with their piezoelectric and dielectric properties.

  11. Department of Energy Project ER25739 Final Report QoS-Enabled, High-performance Storage Systems for Data-Intensive Scientific Computing

    SciTech Connect

    Rangaswami, Raju

    2009-05-31

    This project's work resulted in the following research projects: (1) BORG - Block-reORGanization for Self-optimizing Storage Systems; (2) ABLE - Active Block Layer Extensions; (3) EXCES - EXternal Caching in Energy-Saving Storage Systems; (4) GRIO - Guaranteed-Rate I/O Scheduler. These projects together help in substantially advancing the over-arching project goal of developing 'QoS-Enabled, High-Performance Storage Systems'.

  12. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  13. Speciation of chromium in environmental samples by dual electromembrane extraction system followed by high performance liquid chromatography.

    PubMed

    Safari, Meysam; Nojavan, Saeed; Davarani, Saied Saeed Hosseiny; Morteza-Najarian, Amin

    2013-07-30

    This study proposes dual electromembrane extraction followed by high performance liquid chromatography for the selective separation and preconcentration of Cr(VI) and Cr(III) in different environmental samples. The method was based on the electrokinetic migration of chromium species toward the electrodes of opposite charge into two different hollow fibers. The extractant was then complexed with ammonium pyrrolidinedithiocarbamate for HPLC analysis. The effects of analytical parameters including pH, type of organic solvent, sample volume, stirring rate, extraction time and applied voltage were investigated. The results showed that Cr(III) and Cr(VI) could be simultaneously extracted into the two different hollow fibers. Under optimized conditions, the analytes were quantified by HPLC, with acceptable linearity ranging from 20 to 500 μg L⁻¹ (R² values ≥ 0.9979) and repeatability (RSD) ranging between 9.8% and 13.7% (n = 5). Also, preconcentration factors of 21.8-33, corresponding to recoveries ranging from 31.1% to 47.2%, were achieved for Cr(III) and Cr(VI), respectively. The estimated detection limits (S/N ratio of 3:1) were less than 5.4 μg L⁻¹. Finally, the proposed method was successfully applied to determine Cr(III) and Cr(VI) species in some real water samples. Copyright © 2013 Elsevier B.V. All rights reserved.
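
    In membrane-extraction methods, recovery and preconcentration factor (PF) are linked by the phase-volume ratio: R = PF × (V_acceptor / V_sample). With an assumed 1:70 volume ratio (inferred for illustration; the abstract does not state the volumes), the reported PF range of 21.8-33 reproduces recoveries of roughly 31-47%:

```python
# Recovery implied by a preconcentration factor and phase volumes.
# The 1:70 acceptor-to-sample volume ratio is an assumption, not a reported value.
def recovery_percent(preconc_factor, v_acceptor, v_sample):
    """Recovery (%) = PF * (acceptor volume / sample volume) * 100."""
    return 100.0 * preconc_factor * v_acceptor / v_sample

for pf in (21.8, 33.0):
    print(f"PF {pf:5.1f} -> recovery {recovery_percent(pf, 1, 70):.1f}%")
```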

  14. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylenenaphtalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  15. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    The strategy is based on the Leadership in Energy and Environmental Design (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  16. High Performance Processors for Space Environments: A Subproject of the NASA Exploration Missions Systems Directorate "Radiation Hardened Electronics for Space Environments" Technology Development Program

    NASA Technical Reports Server (NTRS)

    Johnson, M.; Label, K.; McCabe, J.; Powell, W.; Bolotin, G.; Kolawa, E.; Ng, T.; Hyde, D.

    2007-01-01

    Implementation of challenging Exploration Systems Missions Directorate objectives and strategies can be constrained by onboard computing capabilities and power efficiencies. The Radiation Hardened Electronics for Space Environments (RHESE) High Performance Processors for Space Environments project will address this challenge by significantly advancing the sustained throughput and processing efficiency of high-performance radiation-hardened processors, targeting delivery of products by the end of FY12.

  17. Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

    DTIC Science & Technology

    1994-05-01

    proposals submitted by various applicants. 5. Mr. Tom Briere of InfraMetrics has contacted Dr. Li, expressing his interest in using our QWIPs in the... InfraMetrics on our new development in QWIP arrays. 6. Dr. Li has collaborated with Drs. Bill Beck and John Little of Martin Marietta Lab. (MML)...Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

  18. Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

    DTIC Science & Technology

    1993-11-01

    submitted by various applicants. 5. Mr. Tom Briere of InfraMetrics has contacted Dr. Li, expressing his interest in using our QWIPs in the infrared imaging... InfraMetrics on our new development in QWIP arrays. 6. Dr. Li has collaborated with Drs. Bill Beck and John Little of Martin Marietta Lab. (MML), in Baltimore...Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

  19. Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

    DTIC Science & Technology

    1994-02-06

    InfraMetrics has contacted Dr. Li, expressing his interest in using our QWIPs in the infrared imaging sensor applications. Dr. Li has sent a copy of...his most recent ARPA quarterly progress report to Mr. Briere. Dr. Li will keep in touch with InfraMetrics on our new development in QWIP arrays. 6. Dr...Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems. Submitted to

  20. Identification of high performance and component technology for space electrical power systems for use beyond the year 2000

    NASA Technical Reports Server (NTRS)

    Maisel, James E.

    1988-01-01

    Addressed are some of the space electrical power system technologies that should be developed for the U.S. space program to remain competitive in the 21st century. A brief historical overview of some U.S. manned/unmanned spacecraft power systems is discussed to establish the fact that electrical systems are and will continue to become more sophisticated as the power levels approach those on the ground. Adaptive/expert power systems that can function in an extraterrestrial environment will be required to take appropriate action during electrical faults so that the impact is minimal. Man-hours can be reduced significantly by relinquishing tedious routine system component maintenance to the adaptive/expert system. By cataloging component signatures over time, this system can set a flag for a premature component failure and thus possibly avoid a major fault. High frequency operation is important if the electrical power system mass is to be cut significantly. High power semiconductor or vacuum switching components will be required to meet future power demands. System mass tradeoffs have been investigated in terms of operating at high temperature, efficiency, voltage regulation, and system reliability. High temperature semiconductors will be required: silicon carbide materials will operate at temperatures around 1000 K, and diamond materials up to 1300 K. The driver for elevated temperature operation is that radiator mass is reduced significantly, because radiated power scales with the fourth power of temperature and the required radiator area therefore falls as 1/T⁴.
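
    The 1/T⁴ radiator argument follows directly from the Stefan-Boltzmann law: a radiator rejects P = εσAT⁴, so the area needed for a given heat load falls steeply with temperature. A minimal sketch, with illustrative emissivity and heat-load values that are not from the report:

```python
# Stefan-Boltzmann radiator sizing; emissivity and heat load are assumed values.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.85):
    """Radiator area (m^2) needed to reject power_w watts at temp_k kelvin."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# Rejecting 100 kW: area shrinks by (1300/300)^4, roughly 350x,
# between 300 K and 1300 K operation.
for t in (300, 1000, 1300):
    print(f"{t:>5} K: {radiator_area(100e3, t):8.2f} m^2")
```

    This scaling is why the abstract ties radiator mass savings to semiconductor operation at 1000-1300 K.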

  1. Use of a urea and guanidine-HCl-propanol solvent system to purify a growth inhibitory glycopeptide by high-performance liquid chromatography.

    PubMed

    Sharifi, B G; Bascom, C C; Khurana, V K; Johnson, T C

    1985-05-17

    Reversed-phase high-performance liquid chromatography was used to purify an inhibitory glycopeptide where resolution and recovery were enhanced by using urea or guanidine-HCl-isopropanol-water as a solvent system. Isopropanol alone or other solvent systems that have been proposed for such purification steps were not effective in eluting hydrophobic proteins from the reversed-phase column. The application of the urea or guanidine-HCl solvent systems in the separation and purification of membrane proteins, and other hydrophobic macromolecules, could greatly enhance recovery and efficiency of purification.

  2. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench top hardware as well as a commercial microsurgery system.

  3. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    NASA Technical Reports Server (NTRS)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon-based manufacturer of I/O computers. APTEC's work in the context of high density storage media is on programs requiring real-time data capture with low latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope-Data Archival and Distribution System. This is an existing Loral AeroSys designed system, which utilizes an APTEC I/O computer. The key attributes of a system architecture that is suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software) designed for high bandwidth and low latency. APTEC's approach is outlined in this document.

  4. Use of High Resolution DAQ System to Aid Diagnosis of HD2b, a High Performance Nb3Sn Dipole

    SciTech Connect

    Lizarazo, J.; Doering, D.; Doolittle, L.; Galvin, J.; Caspi, S.; Dietderich, D. R.; Felice, H.; Ferracin, P.; Godeke, A.; Joseph, J.; Lietzke, A. F.; Ratti, A.; Sabbi, G. L.; Trillaud, F.; Wang, X.; Zimmerman, S.

    2008-08-17

    A novel voltage monitoring system to record voltage transients in superconducting magnets is being developed at LBNL. This system has 160 monitoring channels capable of measuring differential voltages of up to 1.5 kV with 100 kHz bandwidth and a 500 kS/s digitizing rate. This paper presents analysis results from data taken with a 16-channel prototype system. From that analysis we were able to diagnose a change in the current-temperature margin of the superconducting cable by analyzing flux-jump data collected after a magnet energy extraction failure during testing of a high field Nb3Sn dipole.

  5. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, the mini H2 generator keeps the system far from any explosion risk, because it eliminates the need for a high pressure H2 source. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and very good linear response, and was able to rapidly monitor complicated environmental VOC samples.

  6. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column.

    PubMed

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, the mini H2 generator keeps the system far from any explosion risk, because it eliminates the need for a high pressure H2 source. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and very good linear response, and was able to rapidly monitor complicated environmental VOC samples.

  7. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions to FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared- and distributed-memory machines.
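
    HPF itself extends Fortran 90, but the data-parallel model it embodies (whole-array operations in a global name space, with data layout decided by the compiler rather than by explicit message passing) can be loosely illustrated with NumPy. This is an analogy only, not HPF syntax:

```python
import numpy as np

# Loose NumPy analogy to the data-parallel style: a single-threaded,
# global-name-space computation over whole arrays, with no explicit
# tasking or communication statements in the source.
a = np.arange(16, dtype=float)
b = np.ones(16)

# Whole-array expression; in HPF, distribution directives would tell the
# compiler how to lay out a, b, and c across processors.
c = 2.0 * a + b
print(c[:4])
```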

  8. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  9. Report of the Defense Science Board 1981 Summer Study Panel on Operational Readiness with High Performance Systems

    DTIC Science & Technology

    1982-04-01

    decision to employ automated fault detection and isolation may permit more effective system operation with less skilled personnel. However, if poorly...personnel or building the fault detection and isolation system over again. There are many other choices which must be made early in a program and...what training is required to support the concept? If sophisticated fault detection and isolation techniques are to be used, what demands will be placed

  10. A high-performance ultrasonic system for the simultaneous transmission of data and power through solid metal barriers.

    PubMed

    Lawry, Tristan J; Wilt, Kyle R; Ashdown, Jon D; Scarton, Henry A; Saulnier, Gary J

    2013-01-01

    This paper presents a system capable of simultaneous high-power and high-data-rate transmission through solid metal barriers using ultrasound. By coaxially aligning a pair of piezoelectric transducers on opposite sides of a metal wall and acoustically coupling them to the barrier, an acoustic-electric transmission channel is formed which avoids the need for physical penetration. Independent data and power channels are utilized, but they are separated by only 25.4 mm to reduce the system's form factor. Commercial off-the-shelf components and evaluation boards are used to create real-time prototype hardware, and the full system is capable of transmitting data at 17.37 Mbps and delivering 50 W of power through a 63.5-mm thick steel wall. A synchronous multi-carrier communication scheme (OFDM) is used to achieve a very high spectral efficiency and to ensure that there is only minor interference between the power and data channels. Also presented is a discussion of potential enhancements that could be made to greatly improve the power and data-rate capabilities of the system. This system could have a tremendous impact on improving safety and preserving structural integrity in many military applications (submarines, surface ships, unmanned undersea vehicles, armored vehicles, planes, etc.) as well as in a wide range of commercial, industrial, and nuclear systems.
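
    The multi-carrier (OFDM) scheme mentioned above maps data symbols onto many orthogonal subcarriers with an inverse FFT and recovers them with an FFT; a cyclic prefix absorbs channel echoes. A minimal noiseless sketch in NumPy, with subcarrier count and prefix length chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Minimal OFDM round trip: IFFT to multiplex, cyclic prefix, FFT to recover.
# 64 subcarriers and a 16-sample prefix are illustrative, not the paper's values.
rng = np.random.default_rng(0)
N, CP = 64, 16
bits = rng.integers(0, 2, 2 * N)
symbols = (2*bits[0::2] - 1) + 1j*(2*bits[1::2] - 1)  # QPSK mapping

tx = np.fft.ifft(symbols)             # one OFDM symbol in the time domain
tx = np.concatenate([tx[-CP:], tx])   # cyclic prefix guards against echoes

rx = tx[CP:]                          # receiver strips the prefix...
recovered = np.fft.fft(rx)            # ...and demultiplexes with an FFT
print(np.allclose(recovered, symbols))
```

    In the real channel, per-subcarrier equalization and synchronization sit between the strip-prefix and FFT steps; the orthogonality of the subcarriers is what keeps the data and power bands from interfering badly.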

  11. High-performance computer aided detection system for polyp detection in CT colonography with fluid and fecal tagging

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Wang, Shijun; Kabadi, Suraj; Summers, Ronald M.

    2009-02-01

    CT colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer screening. Computer-aided detection (CAD) of polyps has improved consistency and sensitivity of virtual colonoscopy interpretation and reduced interpretation burden. A CAD system typically consists of four stages: (1) image preprocessing including colon segmentation; (2) initial detection generation; (3) feature selection; and (4) detection classification. In our experience, three existing problems limit the performance of our current CAD system. First, high-density orally administered contrast agents in fecal-tagging CTC have scatter effects on neighboring tissues. The scattering manifests itself as an artificial elevation in the observed CT attenuation values of the neighboring tissues. This pseudo-enhancement phenomenon presents a problem for the application of computer-aided polyp detection, especially when polyps are submerged in the contrast agents. Second, the general kernel approach for surface curvature computation in the second stage of our CAD system can yield erroneous results for thin structures such as small (6-9 mm) polyps and for touching structures such as polyps that lie on haustral folds. These erroneous curvatures reduce the sensitivity of polyp detection. The third problem is that more than 150 features are selected from each polyp candidate in the third stage of our CAD system. These high dimensional features make it difficult to learn a good decision boundary for detection classification and reduce the accuracy of predictions. Therefore, an improved CAD system for polyp detection in CTC data is proposed by introducing three new techniques. First, a scale-based scatter correction algorithm is applied to reduce pseudo-enhancement effects in the image pre-processing stage. Second, a cubic spline interpolation method is utilized to accurately estimate curvatures for initial detection generation. Third, a new dimensionality
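
    The cubic-spline fix for curvature estimation can be illustrated in one dimension: fit a spline to sampled heights and take curvature analytically from its derivatives, κ = |y''| / (1 + y'²)^(3/2). A SciPy sketch of that idea (a stand-in for the authors' 3D surface code, not a reproduction of it):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Spline-based curvature on a 1-D profile; the CAD system works on 3-D surfaces.
x = np.linspace(-1, 1, 21)
y = x**2                    # sampled profile; true curvature at x=0 is 2
cs = CubicSpline(x, y)

def curvature(spline, t):
    """Curvature of y(x) from first and second spline derivatives."""
    d1, d2 = spline(t, 1), spline(t, 2)
    return np.abs(d2) / (1 + d1**2) ** 1.5

print(curvature(cs, 0.0))   # close to 2 for the parabola
```

    Unlike a fixed-size kernel, the spline adapts its derivative estimates to the local sample spacing, which is what helps on thin polyps and fold-touching structures.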

  12. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  13. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.

  14. Interactive image-guided surgery system with high-performance computing capabilities on low-cost workstations: a prototype.

    PubMed

    Roldan, P; Barcia-Salorio, J L; Talamantes, F; Alcañiz, M; Grau, V; Monserrat, C; Juan, C

    1999-01-01

    We present a new frameless stereotactic system prototype that has been initially validated in functional neurosurgery operations and that makes use of an optical position tracker for image-guided neurosurgery. Several devices for tracking different surgical instruments have been designed and manufactured. These devices include an array of infrared light-emitting diodes that are tracked by three charge-coupled device cameras. The system presents several new approaches for surgery planning. For high-quality 3D images of the patient's anatomy, we have developed a parallel version of a volume-rendering algorithm, thus enabling real-time 3D anatomy manipulation on low-cost PC workstations. In order to test the accuracy of the system, the localization of the target by means of a stereotactic frame has been compared with frameless techniques, obtaining a difference of about 1 +/- 1 mm. Copyright 2000 S. Karger AG, Basel

  15. Low cost, high performance white-light fiber-optic hydrophone system with a trackable working point.

    PubMed

    Ma, Jinyu; Zhao, Meirong; Huang, Xinjing; Bae, Hyungdae; Chen, Yongyao; Yu, Miao

    2016-08-22

    A working-point trackable fiber-optic hydrophone with high acoustic resolution is proposed and experimentally demonstrated. The sensor is based on a polydimethylsiloxane (PDMS) cavity molded at the end of a single-mode fiber, acting as a low-finesse Fabry-Perot (FP) interferometer. Working-point tracking is achieved by using a low-cost white-light interferometric system with a simple tunable FP filter. By adjusting the optical path difference of the FP filter in real time, the sensor working point can be kept at its highest-sensitivity point. This helps address sensor working-point drift due to hydrostatic pressure, water absorption, and/or temperature changes. It is demonstrated that the sensor system has a high resolution, with a minimum detectable acoustic pressure of 148 Pa, and superior stability compared to a system using a tunable laser.
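
    The working-point idea can be modeled with a two-beam interference fringe: a low-finesse FP cavity's output varies roughly as a cosine of the optical path difference, and the highest-sensitivity ("quadrature") point sits where the fringe slope peaks. A toy sketch of scanning for that point; all numbers are illustrative, not the sensor's parameters:

```python
import math

# Toy working-point search for a low-finesse Fabry-Perot fringe.
# Wavelength and visibility are assumed values for illustration.
WAVELENGTH = 1.55e-6   # metres
VISIBILITY = 0.8

def intensity(opd):
    """Normalized two-beam fringe vs optical path difference (metres)."""
    return 0.5 * (1 + VISIBILITY * math.cos(2 * math.pi * opd / WAVELENGTH))

def slope(opd, d=1e-10):
    return (intensity(opd + d) - intensity(opd - d)) / (2 * d)

# Scan the tunable filter's path difference and pick the steepest point,
# which is where a small acoustic perturbation gives the largest output change.
opds = [i * 1e-9 for i in range(1600)]
best = max(opds, key=lambda o: abs(slope(o)))
print(f"quadrature near OPD = {best * 1e9:.0f} nm")
```

    The tracker in the paper effectively servos the filter to hold the system at such a point as the cavity drifts.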

  16. Accurate and high-performance 3D position measurement of fiducial marks by stereoscopic system for railway track inspection

    NASA Astrophysics Data System (ADS)

    Gorbachev, Alexey A.; Serikova, Mariya G.; Pantyushina, Ekaterina N.; Volkova, Daria A.

    2016-04-01

    Modern demands for railway track measurements require high accuracy (about 2-5 mm) of rail placement along the track to ensure smooth, safe and fast transportation. As a means for railway geometry measurement, we suggest a stereoscopic system that measures the 3D position of fiducial marks arranged along the track using image processing algorithms. The system accuracy was verified during laboratory tests by comparison with precise laser tracker indications. An accuracy of +/-1.5 mm within a measurement volume of 150×400×5000 mm was achieved during the tests. This confirmed that the stereoscopic system demonstrates good measurement accuracy and can potentially be used as a fully automated means for railway track inspection.
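
    The stereoscopic principle behind such a system reduces, in the rectified two-camera case, to depth from disparity: Z = f·B/d for focal length f (pixels), baseline B, and disparity d (pixels). A minimal sketch with illustrative numbers, not the paper's calibration:

```python
# Depth from stereo disparity for a rectified camera pair.
# Focal length, baseline, and disparity values below are assumptions.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a fiducial mark from its stereo disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A mark 5 m away seen with f = 2000 px and B = 0.5 m produces a
# disparity of 2000 * 0.5 / 5 = 200 px; inverting recovers the depth.
print(depth_from_disparity(2000, 0.5, 200.0))  # 5.0
```

    Note how accuracy degrades with range: at a fixed pixel-level disparity error, the depth error grows roughly as Z²/(f·B), which is why a 5 m deep measurement volume is a demanding target.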

  17. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster.

    PubMed

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR, which is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high-affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts.
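
    The consensus motif TCGGHHWYHCGGH uses IUPAC ambiguity codes (H = A/C/T, W = A/T, Y = C/T), so scanning a promoter for candidate TerR sites is a simple degenerate-pattern search. A sketch of that expansion into a regular expression; the promoter sequence below is invented for illustration, not from the paper:

```python
import re

# Expand an IUPAC degenerate motif into a regex and scan a sequence.
# Only the codes present in the motif are included in this table.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "W": "[AT]", "Y": "[CT]", "H": "[ACT]"}

def motif_to_regex(motif):
    return "".join(IUPAC[base] for base in motif)

pattern = re.compile(motif_to_regex("TCGGHHWYHCGGH"))

# Invented example promoter containing one conforming site.
promoter = "AAATCGGACTCTCGGATTT"
for m in pattern.finditer(promoter):
    print(m.start(), m.group())
```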

  18. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster

    PubMed Central

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR, which is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high-affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts. PMID:25852654

  19. Design of a high-performance slide and drive system for a small precision machining research lathe

    SciTech Connect

    Donaldson, R.R.; Maddux, A.S.

    1984-03-01

    The development of high-accuracy machine tools, principally through interest in diamond turning, plus the availability of new cutting tool materials, offers the possibility of improving workpiece accuracy for a much larger variety of materials than that addressed by diamond tools. This paper describes the design and measured performance of a slideway and servo-drive system for a small lathe intended as a tool for research on the above subject, with emphasis on the servo-control design. The slide system provides high accuracy and stiffness over a travel of 100 mm, utilizing oil hydrostatic bearings and a capstan roller drive with integral dc motor and tachometer.

  20. Imagining School Autonomy in High-Performing Education Systems: East Asia as a Source of Policy Referencing in England

    ERIC Educational Resources Information Center

    You, Yun; Morris, Paul

    2016-01-01

    Education reform is increasingly based on emulating the features of "world-class" systems that top international attainment surveys and, in England specifically, East Asia is referenced as the "inspiration" for their education reforms. However, the extent to which the features identified by the UK Government accord with the…

  1. Process innovation in high-performance systems: From polymeric composites R&D to design and build of airplane showers

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Jui

    In the aerospace industry, reducing aircraft weight is key because it increases flight performance and drives down operating costs. With fierce competition in the commercial aircraft industry, companies that focused primarily on exterior aircraft performance design issues are turning more attention to the design of the aircraft interior. Simultaneously, there has been an increase in the number of new amenities offered to passengers, especially in first class travel and executive jets. These new amenities present novel and challenging design parameters, including integration into existing aircraft systems without sacrificing flight performance. The objective of this study was to design an aircraft re-circulating shower system that weighs significantly less than pre-existing shower designs. This was accomplished by integrating processes from polymeric composite materials, water filtration, and project management. Carbon/epoxy laminates exposed to hygrothermal cycling conditions were evaluated and compared to model calculations. Novel materials and a variety of fabrication processes were developed to create new types of paper for honeycomb applications. Experiments were then performed on the properties and honeycomb processability of these new papers. Standard water quality tests were performed on samples taken from the re-circulating system to see if current regulatory standards were being met. These studies were executed and integrated with tools from project management to design a better shower system for commercial aircraft applications.

  2. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems

    PubMed Central

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-01

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which demands an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and widely used for many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose the design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and then simulation is conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance considering the required QoS. PMID:28134853
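
    The core A-MPDU idea described above, packing many small periodic sensor samples into one MAC frame so that a single channel access carries them all, can be sketched as a simple first-fit packing. This is an illustrative sketch only: the payload limit matches the IEEE 802.15.4 PHY maximum, but the per-subframe delimiter size and packet sizes are assumed values, not figures from the paper.

```python
# Hypothetical sketch of A-MPDU-style frame aggregation for small sensor
# packets. MAX_FRAME_PAYLOAD is the IEEE 802.15.4 PHY payload limit;
# DELIMITER_OVERHEAD is an assumed per-subframe delimiter cost.

MAX_FRAME_PAYLOAD = 127   # IEEE 802.15.4 maximum PHY payload (bytes)
DELIMITER_OVERHEAD = 2    # assumed per-subframe delimiter size (bytes)

def aggregate(packets):
    """Pack small packets into as few frames as possible (first-fit)."""
    frames, current, used = [], [], 0
    for pkt in packets:
        cost = len(pkt) + DELIMITER_OVERHEAD
        if used + cost > MAX_FRAME_PAYLOAD and current:
            frames.append(current)   # current frame is full; start a new one
            current, used = [], 0
        current.append(pkt)
        used += cost
    if current:
        frames.append(current)
    return frames

# Ten 10-byte vital-sign samples need one channel access instead of ten.
samples = [bytes(10) for _ in range(10)]
frames = aggregate(samples)
print(len(frames))  # 1
```

    With per-packet transmission each sample would pay the contention and header cost separately; aggregation amortises that cost across the whole frame, which is the source of the throughput and delay gains the abstract reports.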

  4. A low-cost gradient system for high-performance liquid chromatography. Quantitation of complex pharmaceutical raw materials.

    PubMed

    Erni, F; Frei, R W

    1976-09-29

    A device is described that makes use of an eight-port motor valve to generate step gradients on the low-pressure side of a piston pump with a low dead volume. Such a gradient device with an automatic control unit, which also permits repetition of previous steps, can be built for about half the cost of a gradient system with two pumps. Applications of this gradient unit to the separation of complex mixtures of glycosides and alkaloids are discussed and compared with separation systems using two high-pressure pumps. The gradients that are used on reversed-phase material with solvent mixtures of water and completely miscible organic solvents are suitable for quantitative routine control of pharmaceutical products. The reproducibility of retention data is excellent over several months and, with the use of loop injectors, major components can be determined quantitatively with a reproducibility of better than 2% (relative standard deviation). The step gradient selector valve can also be used as an introduction system for very large sample volumes. Up to 1 l can be injected and samples with concentrations of less than 1 ppb can be determined with good reproducibilities.

  5. Reversed-phase systems for the analysis of catecholamines and related compounds by high-performance liquid chromatography.

    PubMed

    Crombeen, J P; Kraak, J C; Poppe, H

    1978-12-21

    Phase systems using alkyl-modified silica as an adsorbent, used as such and as a support for dynamically coated ion exchangers, were investigated for their capability in separating catecholamines and related compounds. Simple reversed-phase adsorption chromatography with C8-bonded silica is not able to separate these compounds very well because of (i) the very small retention of the more basic compounds in circumstances where the acidic compounds are well separated, (ii) bad peak shapes and (iii) low column efficiencies, although the last drawback can be circumvented by the addition of inorganic anions to the eluent. The addition of a dynamically coated cation exchanger, sodium dodecylsulphate (SDS), to the eluent not only brings about drastic changes in the selectivity, but also makes available an additional degree of freedom for influencing the selectivity. The retention of the basic solutes increases upon addition of SDS and the retention becomes inversely proportional to the counter-ion (Na+) concentration. Further, it was found that columns previously loaded with SDS can be used with SDS-free eluents when a pre-column loaded with SDS is used, or with eluents containing a very small amount of SDS (less than 0.001%, w/v). These SDS-coated phase systems behave similarly to phase systems containing SDS in the eluent and show a better column stability and UV background.

  6. Comparison of ultrasonic and thermospray systems for high performance sample introduction to inductively coupled plasma atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Conver, Timothy S.; Koropchak, John A.

    1995-06-01

    This paper describes detailed work done in our lab to compare analytical figures of merit for pneumatic, ultrasonic and thermospray sample introduction (SI) systems with three different inductively coupled plasma-atomic emission spectrometry (ICP-AES) instruments. One instrument from Leeman Labs, Inc. has an air path echelle spectrometer and a 27 MHz ICP. For low dissolved solid samples with this instrument, we observed that the ultrasonic nebulizer (USN) and fused silica aperture thermospray (FSApT) both offered similar LOD improvements as compared to pneumatic nebulization (PN), 14 and 16 times, respectively. Average sensitivities compared to PN were better for the USN, by 58 times, compared to 39 times for the FSApT. For solutions containing high dissolved solids we observed that FSApT optimized at the same conditions as for low dissolved solids, whereas USN required changes in power and gas flows to maintain a stable discharge. These changes degraded the LODs for USN substantially as compared to those utilized for low dissolved solid solutions, limiting improvement compared to PN to an average factor of 4. In general, sensitivities for USN were degraded at these new conditions. When solutions with 3000 μg/g Ca were analyzed, LOD improvements were smaller for FSApT and USN, but FSApT showed an improvement over USN of 6.5 times. Sensitivities compared to solutions without high dissolved solids were degraded by 19% on average for FSApT, while those for USN were degraded by 26%. The SI systems were also tested with a Varian Instruments Liberty 220 having a vacuum path Czerny-Turner monochromator and a 40 MHz generator. The sensitivities with low dissolved solids solutions compared to PN were 20 times better for the USN and 39 times better for FSApT, and LODs for every element were better for FSApT. Better correlation between relative sensitivities and anticipated relative analyte mass fluxes for FSApT and USN was observed with the Varian instrument. LOD

  7. Making resonance a common case: a high-performance implementation of collective I/O on parallel file systems

    SciTech Connect

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2009-01-01

    Collective I/O is a widely used technique to improve I/O performance in parallel computing. It can be implemented as a client-based or server-based scheme. The client-based implementation is more widely adopted in MPI-IO software such as ROMIO because of its independence from the storage system configuration and its greater portability. However, existing implementations of client-side collective I/O do not take into account the actual pattern of file striping over multiple I/O nodes in the storage system. This can cause a significant number of requests for non-sequential data at I/O nodes, substantially degrading I/O performance. Investigating the surprisingly high I/O throughput achieved when there is an accidental match between a particular request pattern and the data striping pattern on the I/O nodes, we reveal the resonance phenomenon as the cause. Exploiting readily available information on data striping from the metadata server in popular file systems such as PVFS2 and Lustre, we design a new collective I/O implementation technique, resonant I/O, that makes resonance a common case. Resonant I/O rearranges requests from multiple MPI processes to transform non-sequential data accesses on I/O nodes into sequential accesses, significantly improving I/O performance without compromising the independence of a client-based implementation. We have implemented our design in ROMIO. Our experimental results show that the scheme can increase I/O throughput for some commonly used parallel I/O benchmarks such as mpi-io-test and ior-mpi-io over the existing implementation of ROMIO by up to 157%, with no scenario demonstrating significantly decreased performance.
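
    The request rearrangement at the heart of resonant I/O can be illustrated with a toy scheduler: given the file's round-robin striping layout, group the byte ranges requested by all processes by the I/O node that holds them, then sort each group so every server sees one sequential run. The stripe size, node count and request shapes below are illustrative assumptions, not PVFS2/Lustre specifics or the actual ROMIO implementation.

```python
# Toy sketch of the resonant-I/O idea: regroup requests so that each
# I/O node receives its accesses in sequential (sorted) offset order.

STRIPE_SIZE = 64 * 1024   # bytes per stripe unit (assumed)
NUM_IO_NODES = 4          # number of I/O servers (assumed)

def node_of(offset):
    """I/O node holding a given file offset under round-robin striping."""
    return (offset // STRIPE_SIZE) % NUM_IO_NODES

def resonant_schedule(requests):
    """Group (offset, length) requests by I/O node; sort each group."""
    per_node = {n: [] for n in range(NUM_IO_NODES)}
    for offset, length in requests:
        per_node[node_of(offset)].append((offset, length))
    for reqs in per_node.values():
        reqs.sort()          # sequential access order on each server
    return per_node

# Interleaved requests from several processes become sequential streams.
reqs = [(i * STRIPE_SIZE, STRIPE_SIZE) for i in (3, 0, 5, 2, 1, 4, 7, 6)]
sched = resonant_schedule(reqs)
print([off for off, _ in sched[0]])  # [0, 262144]
```

    The point of the sketch is that the regrouping needs only the striping metadata (stripe size, node count), which is exactly the information the paper obtains from the file system's metadata server.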

  8. High-Performance Consensus Control in Networked Systems With Limited Bandwidth Communication and Time-Varying Directed Topologies.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang

    2016-02-08

    Communication data rates and energy constraints are two important factors that have to be considered in the coordination control of multiagent networks. Although some encoder-decoder-based consensus protocols are available, there still exists a fundamental theoretical problem: how can we further reduce the update rate of the control input for each agent without changing the consensus performance? In this paper, we consider the problem of average consensus over directed and time-varying digital networks of discrete-time first-order multiagent systems with limited communication data transmission rates. Each agent has a real-valued state but can only exchange binary symbolic sequences with its neighbors due to bandwidth constraints. A class of novel event-triggered dynamic encoding and decoding algorithms is proposed, based on which a kind of consensus protocol is presented. Moreover, we develop a scheme to select the numbers of time-varying quantization levels for each connected communication channel in the time-varying directed topologies at each time step. The analytical relation among system and network parameters is characterized explicitly. It is shown that the asymptotic convergence rate is related to the scale of the network, the number of quantization levels, the system parameter, and the network structure. It is also found that under the designed event-triggered protocol, for a directed and time-varying digital network which uniformly contains a spanning tree over a time interval, the average consensus can be achieved with an exponential convergence rate based on merely 1-bit information exchange between each pair of adjacent agents at each time step.

  9. Aqueous biphasic systems containing PEG-based deep eutectic solvents for high-performance partitioning of RNA.

    PubMed

    Zhang, Hongmei; Wang, Yuzhi; Zhou, Yigang; Xu, Kaijia; Li, Na; Wen, Qian; Yang, Qin

    2017-08-01

    In this work, 16 kinds of novel deep eutectic solvents (DESs) composed of polyethylene glycol (PEG) and quaternary ammonium salts were coupled with aqueous biphasic systems (ABSs) to extract RNA. The phase-forming ability of the ABSs was comprehensively evaluated, covering the effects of various proportions of the DESs' components, the carbon chain length and anion species of the quaternary ammonium salts, the average molecular weight of the PEG, and the nature of the inorganic salts. The systems were then applied to RNA extraction, and the results revealed that the extraction efficiency values were distinctly enhanced by relatively lower PEG content in the DESs, smaller PEG molecular weights, longer carbon chains of the quaternary ammonium salts and more hydrophobic inorganic salts. The system composed of [TBAB][PEG600] and Na2SO4 was then utilized in the influence-factor experiments, showing that electrostatic interaction was the dominant force in RNA extraction. Accordingly, back-extraction efficiency values ranging between 85.19% and 90.78% were obtained by adjusting the ionic strength. Besides, the selective separation of RNA and tryptophan (Trp) was successfully accomplished: 86.19% of the RNA was distributed in the bottom phase, while 72.02% of the Trp was enriched in the top phase of the novel ABSs. Finally, dynamic light scattering (DLS) and transmission electron microscopy (TEM) were used to further investigate the extraction mechanism. The proposed method reveals the outstanding feasibility of the newly developed ABSs formed by PEG-based DESs and inorganic salts for the green extraction of RNA. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Compensation of Wave-Induced Motion and Force Phenomena for Ship-Based High Performance Robotic and Human Amplifying Systems

    SciTech Connect

    Love, LJL

    2003-09-24

    The decrease in manpower and increase in material handling needs on many Naval vessels provide the motivation to explore the modeling and control of Naval robotic and robotic assistive devices. This report addresses the design, modeling, control and analysis of position- and force-controlled robotic systems operating on the deck of a moving ship. First, we provide background information that quantifies the motion of the ship, both in terms of frequency and amplitude. We then formulate the motion of the ship in terms of homogeneous transforms. This transformation provides a link between the motion of the ship and the base of a manipulator. We model the kinematics of a manipulator as a serial extension of the ship motion. We then show how to use these transforms to formulate the kinetic and potential energy of a general, multi-degree-of-freedom manipulator moving on a ship. As a demonstration, we consider two examples: a one-degree-of-freedom system experiencing three sea states operating in a plane, to verify the methodology, and a three-degree-of-freedom system experiencing all six degrees of ship motion, to illustrate the ease of computation and the complexity of the solution. The first series of simulations explores the impact wave motion has on the tracking performance of a position-controlled robot. We provide a preliminary comparison between conventional linear control and Repetitive Learning Control (RLC) and show how fixed-time-delay RLC breaks down due to the varying nature of the wave disturbance frequency. Next, we explore the impact wave motion disturbances have on Human Amplification Technology (HAT). We begin with a description of the traditional HAT control methodology. Simulations show that the motion of the base of the robot, due to ship motion, generates disturbance forces reflected to the operator that significantly degrade positioning accuracy and resolution at higher sea states. As with position-controlled manipulators, augmenting the control with a Repetitive

  11. MO-G-17A-01: Innovative High-Performance PET Imaging System for Preclinical Imaging and Translational Researches

    SciTech Connect

    Sun, X; Lou, K; Deng, Z; Shao, Y

    2014-06-15

    Purpose: To develop a practical and compact preclinical PET system with innovative technologies for the substantially improved imaging performance required for advanced imaging applications. Methods: Several key components of the detector, readout electronics and data acquisition have been developed and evaluated to achieve leapfrogged imaging performance over a prototype animal PET we had developed. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm{sup 3} LYSO scintillators with each end coupled to a latest 4×4 array of 3×3 mm{sup 2} silicon photomultipliers (with ∼0.2 mm insensitive gap between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to reserve an air gap to achieve higher depth-of-interaction (DOI) resolution and other detector performance. Front-end readout electronics with an upgraded 16-ch ASIC were newly developed and tested, as was the compact, high-density FPGA-based data acquisition and transfer system targeting a 10M/s coincidence counting rate with low power consumption. The energy, timing and DOI resolutions of the new detector module with the data acquisition system were evaluated. An initial Na-22 point-source image was acquired with 2 rotating detectors to assess the system imaging capability. Results: There are no insensitive gaps at the detector edge, and thus the module is capable of tiling into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. Measured energy, timing, and DOI resolutions are around 17%, 2.7 ns and 1.96 mm (mean value). A point-source image was acquired successfully without detector/electronics calibration and data correction. Conclusion: The newly developed advanced detector and readout electronics will enable achieving the targeted scalable and compact PET system in a stationary configuration with >15% sensitivity, ∼1.3 mm uniform imaging resolution, and fast acquisition counting rate

  12. High performance nuclear thermal propulsion system for near term exploration missions to 100 A.U. and beyond

    NASA Astrophysics Data System (ADS)

    Powell, James R.; Paniagua, John; Maise, George; Ludewig, Hans; Todosow, Michael

    1999-05-01

    A new compact ultra light nuclear reactor engine design termed MITEE (MIniature Reac Tor EnginE) is described. MITEE heats hydrogen propellant to 3000 K, achieving a specific impulse of 1000 seconds and a thrust-to-weight of 10. Total engine mass is 200 kg, including reactor, pump, auxiliaries and a 30% contingency. MITEE enables many types of new and unique missions to the outer solar system not possible with chemical engines. Examples include missions to 100 A.U. in less than 10 years, flybys of Pluto in 5 years, sample return from Pluto and the moons of the outer planets, unlimited ramjet flight in planetary atmospheres, etc. Much of the necessary technology for MITEE already exists as a result of previous nuclear rocket development programs. With some additional development, initial MITEE missions could begin in only 6 years.

  13. Ultra-high performance mirror systems for the imaging and coherence beamline I13 at the Diamond Light Source

    NASA Astrophysics Data System (ADS)

    Wagner, U. H.; Alcock, S.; Ludbrook, G.; Wiatryzk, J.; Rau, C.

    2012-05-01

    I13L is a 250 m long hard X-ray beamline (6 keV to 35 keV) currently under construction at the Diamond Light Source. The beamline comprises two independent experimental endstations: one for imaging in direct space using X-ray microscopy and one for imaging in reciprocal space using coherent-diffraction-based imaging techniques. To minimise the impact of thermal fluctuations and vibrations on the beamline performance, we are developing a new generation of ultra-stable beamline instrumentation with highly repeatable adjustment mechanisms, using low-thermal-expansion materials such as granite and large piezo-driven flexure stages. To minimise beam distortion we use very high quality optical components such as large ion-beam-polished mirrors. In this paper we present the first metrology results on a newly designed mirror system following this design philosophy.

  14. High-performance work systems in health care management, part 1: development of an evidence-informed model.

    PubMed

    Garman, Andrew N; McAlearney, Ann Scheck; Harrison, Michael I; Song, Paula H; McHugh, Megan

    2011-01-01

    Although management practices are recognized as important factors in improving health care quality and efficiency, most research thus far has focused on individual practices, ignoring or underspecifying the contexts within which these practices are operating. Research from other industries, which has increasingly focused on systems rather than individual practices, has yielded results that may benefit health services management. Our goal was to develop a conceptual model, on the basis of prior research from health care as well as other industries, that could be used to inform important contextual considerations within health care. Using theoretical frameworks from A. Donabedian (1966), P. M. Wright, T. M. Gardner, and L. M. Moynihan (2003), and B. Schneider, D. B. Smith, and H. W. Goldstein (2000), and review methods adapted from R. Pawson (2006b), we reviewed relevant research from peer-reviewed and other industry-relevant sources to inform our model. The model we developed was then reviewed with a panel of practitioners, including experts in quality and human resource management, to assess the applicability of the model to health care settings. The resulting conceptual model identified four practice bundles comprising 14 management practices, as well as nine factors influencing the adoption and perceived sustainability of these practices. The mechanisms by which these practices influence care outcomes are illustrated using the example of hospital-acquired infections. In addition, limitations of the current evidence base are discussed, and an agenda for future research in health care settings is outlined. Results may help practitioners better conceptualize management practices as part of a broader system of work practices. This may, in turn, help practitioners to prioritize management improvement efforts more systematically.

  15. High performance liquid level monitoring system based on polymer fiber Bragg gratings embedded in silicone rubber diaphragms

    NASA Astrophysics Data System (ADS)

    Marques, Carlos A. F.; Peng, Gang-Ding; Webb, David J.

    2015-05-01

    Liquid-level sensing technologies have attracted great prominence, because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid-level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid-level sensors have been investigated which are based on different operating principles, such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid-level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid-level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression on the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach.
First, the use of linear regression using
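
    The regression step described above can be sketched numerically: sub-surface sensors read pressures rising linearly with depth (p = p0 + ρg·d), so fitting a line to pressure versus depth and solving for the depth where the fitted pressure equals ambient locates the surface. The sensor depths, readings and ambient threshold below are made-up example values, not data from the paper.

```python
# Illustrative sketch of the multi-sensor liquid-level estimate via
# least-squares regression on the sub-surface pressure readings.

AMBIENT = 101325.0        # ambient pressure (Pa)
RHO_G = 1000.0 * 9.81     # water density * gravity (Pa per metre of depth)

def liquid_level(depths, pressures, tol=50.0):
    """Estimate the depth of the liquid surface from pressure sensors."""
    # Keep only sensors clearly below the surface (above-ambient reading).
    sub = [(d, p) for d, p in zip(depths, pressures) if p > AMBIENT + tol]
    n = len(sub)
    # Least-squares fit p = a*d + b over the sub-surface sensors.
    sx = sum(d for d, _ in sub); sy = sum(p for _, p in sub)
    sxx = sum(d * d for d, _ in sub); sxy = sum(d * p for d, p in sub)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return (AMBIENT - b) / a   # depth at which fitted pressure = ambient

# Sensors every 0.5 m; liquid surface 1.2 m below the top sensor.
depths = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
pressures = [AMBIENT if d < 1.2 else AMBIENT + RHO_G * (d - 1.2) for d in depths]
print(round(liquid_level(depths, pressures), 2))  # 1.2
```

    Note how the regression recovers the surface position between two sensors, which is exactly why the multi-sensor scheme outperforms simply bracketing the surface by the first above-ambient reading.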

  16. Evolution of high-performance swimming in sharks: transformations of the musculotendinous system from subcarangiform to thunniform swimmers.

    PubMed

    Gemballa, Sven; Konstantinidis, Peter; Donley, Jeanine M; Sepulveda, Chugey; Shadwick, Robert E

    2006-04-01

    In contrast to all other sharks, lamnid sharks perform a specialized, fast and continuous "thunniform" type of locomotion, more similar to that of tunas than to any other known shark or bony fish. Within sharks, it has evolved from a subcarangiform mode. Experimental data show that the two swimming modes in sharks differ remarkably in kinematic patterns as well as in muscle activation patterns, but the morphology of the underlying musculotendinous system (red muscles and myosepta) that drives continuous locomotion remains largely unknown. The goal of this study was to identify differences in the musculotendinous system of the two swimming types and to evaluate these differences in an evolutionary context. Three subcarangiform sharks (the velvet belly lantern shark, Etmopterus spinax; the smallspotted catshark, Scyliorhinus canicula; and the blackmouth catshark, Galeus melanostomus) from the two major clades (two galeans, one squalean) and one lamnid shark, the shortfin mako, Isurus oxyrinchus, were compared with respect to 1) the 3D shape of myomeres and myosepta at different body positions; 2) the tendinous architecture (collagenous fiber pathways) of myosepta from different body positions; and 3) the association of red muscles with myoseptal tendons. Results show that the three subcarangiform sharks are morphologically similar but differ remarkably from the lamnid condition. Moreover, the "subcarangiform" morphology is similar to the condition known from teleostomes. Thus, major features of the "subcarangiform" condition in sharks evolved early in gnathostome history: myosepta have one main anterior-pointing cone and two posterior-pointing cones that project into the musculature. Within a single myoseptum, cones are connected by longitudinally oriented tendons (the hypaxial and epaxial lateral and myorhabdoid tendons). Mediolaterally oriented tendons (epineural and epipleural tendons; mediolateral fibers) connect the vertebral axis and skin.
An individual lateral

  17. High performance seizure-monitoring system using a vibration sensor and videotape recording: behavioral analysis of genetically epileptic rats.

    PubMed

    Amano, S; Yokoyama, M; Torii, R; Fukuoka, J; Tanaka, K; Ihara, N; Hazama, F

    1997-06-01

    A new seizure-monitoring apparatus containing a piezoceramic vibration sensor combined with videotape recording was developed. Behavioral analysis of Ihara's genetically epileptic rat (IGER), which is a recently developed novel mutant with spontaneously limbic-like seizures, was performed using this new device. Twenty 8-month-old male IGERs were monitored continuously for 72 h. Abnormal behaviors were detected by use of a vibration recorder, and epileptic seizures were confirmed by videotape recordings taken synchronously with vibration recording. Representative forms of seizures were generalized convulsions and circling seizures. Generalized convulsions were found in 13 rats, and circling seizures in 7 of 20 animals. Two rats had generalized and circling seizures, and two rats did not have seizures. Although there was no apparent circadian rhythm to the generalized seizures, circling seizures occurred mostly between 1800 and 0800 h. A correlation between the sleep-wake cycle and the occurrence of circling seizures seems likely. Without exception, all the seizure actions were recorded by the vibration recorder and the videotape recorder. To eliminate the risk of a false-negative result, investigators scrutinized the information obtained from the vibration sensor and the videotape recorder. The newly developed seizure-monitoring system was found to facilitate detailed analysis of epileptic seizures in rats.

  18. A high-performance polycarbonate electrophoresis microchip with integrated three-electrode system for end-channel amperometric detection.

    PubMed

    Wang, Yurong; Chen, Hengwu; He, Qiaohong; Soper, Steven A

    2008-05-01

    A fully integrated polycarbonate (PC) microchip for CE with end-channel electrochemical detection operated in an amperometric mode (CE-ED) has been developed. The on-chip integrated three-electrode system consisted of a gold working electrode, an Ag/AgCl reference electrode and a platinum counter electrode, and was fabricated by photo-directed electroless plating combined with electroplating. The working electrode was positioned against the separation channel exit to reduce post-channel band broadening. The interference of the electrophoresis high voltage (HV) with the amperometric detection was assessed with respect to detection noise and potential shifts at various working-to-reference electrode spacings. It was observed that the electrophoresis HV interference caused by positioning the working electrode against the channel exit could be diminished by using an on-chip integrated reference electrode positioned in close proximity (100 μm) to the working electrode. The CE-ED microchip was demonstrated for the separation of model analytes, including dopamine (DA) and catechol (CA). Detection limits of 132 and 164 nM were achieved for DA and CA, respectively, and a theoretical plate number of 2.5×10⁴/m was obtained for DA. Relative standard deviations in peak heights observed for five runs of a standard solution containing the two analytes (0.1 mM each) were 1.2 and 3.1% for DA and CA, respectively. The chip could be used continuously for more than 8 h without significant deterioration in analytical performance.

  19. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Volume 1, Final report

    SciTech Connect

    1996-02-01

    A major objective of the coal-fired high performance power systems (HIPPS) program is to achieve significant increases in the thermodynamic efficiency of coal use for electric power generation. Through increased efficiency, all airborne emissions can be decreased, including emissions of carbon dioxide. High performance power systems as defined for this program are coal-fired, high-efficiency systems where the combustion products from coal do not contact the gas turbine. Typically, this type of system will involve some indirect heating of gas turbine inlet air and then topping combustion with a cleaner fuel. The topping combustion fuel can be natural gas or another relatively clean fuel. Fuel gas derived from coal is an acceptable fuel for the topping combustion. The ultimate goal for HIPPS is to have a system that has 95 percent of its heat input from coal. Interim systems that have at least 65 percent heat input from coal are acceptable, but these systems are required to have a clear development path to a system that is 95 percent coal-fired. A three-phase program has been planned for the development of HIPPS. Phase 1, reported herein, includes the development of a conceptual design for a commercial plant. Technical and economic feasibility have been analyzed for this plant. Preliminary R&D on some aspects of the system was also done in Phase 1, and a Research, Development and Test plan was developed for Phase 2. Work in Phase 2 includes the testing and analysis that are required to develop the technology base for a prototype plant. This work includes pilot plant testing at a scale of around 50 MMBtu/hr heat input. The culmination of the Phase 2 effort will be a site-specific design and test plan for a prototype plant. Phase 3 is the construction and testing of this plant.

  20. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  1. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

Traditional remote sensing instruments are multispectral, collecting observations at a few spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
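The core PCA transformation described in this abstract can be sketched serially in a few lines. This is an illustrative reconstruction, not the report's parallel Beowulf implementation; the function name and data shapes are hypothetical:

```python
import numpy as np

def pca_reduce(cube, k):
    """Project a (rows, cols, bands) spectral cube onto its k leading
    principal components. Serial sketch only; the report's version
    distributes the covariance computation across cluster nodes."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    pixels -= pixels.mean(axis=0)                     # center each band
    cov = (pixels.T @ pixels) / (pixels.shape[0] - 1)
    _, eigvecs = np.linalg.eigh(cov)                  # eigenvalues ascending
    top = eigvecs[:, ::-1][:, :k]                     # k largest components
    return (pixels @ top).reshape(rows, cols, k)

reduced = pca_reduce(np.random.rand(16, 16, 32), 3)
print(reduced.shape)  # (16, 16, 3)
```

The covariance step is the communication bottleneck the abstract refers to: each node can compute a partial `pixels.T @ pixels` on its share of the pixels, but the partial sums must be reduced across the interconnect, which is where Myrinet outperforms Ethernet.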

  3. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC) both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  4. Simultaneous determination of nicotine and cotinine in serum using high-performance liquid chromatography with fluorometric detection and postcolumn UV-photoirradiation system.

    PubMed

    Yasuda, Makoto; Ota, Tatsuhiro; Morikawa, Atsushi; Mawatari, Ken-ichi; Fukuuchi, Tomoko; Yamaoka, Noriko; Kaneko, Kiyoko; Nakagomi, Kazuya

    2013-09-01

A simple and rapid method for the simultaneous determination of serum nicotine and cotinine using high-performance liquid chromatography (HPLC) with fluorometric detection and a postcolumn ultraviolet-photoirradiation system was developed. Analytes were extracted from alkalinized human serum via liquid-liquid extraction using chloroform. The organic phase was back-extracted with an acidified aqueous phase, and the analytes were directly injected into an ion-pair reversed-phase HPLC system. 6-Aminoquinoline was used as an internal standard. Nicotine, cotinine, and 6-aminoquinoline were separated within 14 min. The extraction efficiency of nicotine and cotinine was greater than 91%. The linear range was 0.30-1000 ng for nicotine and 0.06-1000 ng for cotinine. In serum samples from smokers, the concentrations of nicotine and cotinine were 8-15 ng/mL and 156-372 ng/mL, respectively.

  5. Two-dimensional high-performance thin-layer chromatography of tryptic bovine albumin digest using normal- and reverse-phase systems with silanized silica stationary phase.

    PubMed

    Gwarda, Radosław Łukasz; Dzido, Tadeusz Henryk

    2013-10-18

Among the many advantages of planar techniques, two-dimensional (2D) separation seems the most important for analysis of complex samples. Here we present quick, simple and efficient two-dimensional high-performance thin-layer chromatography (2D HPTLC) of a bovine albumin digest using commercial HPTLC RP-18W plates (a silica-based stationary phase with chemically bonded octadecyl ligands at a coverage density of 0.5 μmol/m², from Merck, Darmstadt). We show that, at low or high concentrations of water in a mobile phase comprising methanol and some additives, the chromatographic systems with these plates exhibit normal- or reversed-phase liquid chromatography properties, respectively, for separation of the peptides obtained. These two systems show quite different separation selectivity, and their combination into a 2D HPTLC process provides excellent separation of the peptides of the bovine albumin digest.

  6. Integration of tools for the design and assessment of high-performance, highly reliable computing systems (DAHPHRS). Final report, Jun 89-Sep 90

    SciTech Connect

    Scheper, C.O.; Baker, R.L.; Waters, H.L.

    1991-12-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the system engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report will describe an investigation which examined methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercube, the Encore Multimac, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  7. Use of Microdialysis-Based Continuous Glucose Monitoring to Drive Real-Time Semi-Closed-Loop Insulin Infusion

    PubMed Central

    Freckmann, Guido; Jendrike, Nina; Buck, Harvey; Bousamra, Steven; Galley, Paul; Thukral, Ajay; Wagner, Robin; Weinert, Stefan; Haug, Cornelia

    2014-01-01

Continuous glucose monitoring (CGM) and automated insulin delivery may make diabetes management substantially easier, if the quality of the resulting therapy remains adequate. In this study, a semi-closed-loop control algorithm was used to drive insulin therapy, and its quality was compared to that of subject-directed therapy. Twelve subjects stayed at the study site for approximately 70 hours and were provided with the investigational Automated Pancreas System Test Stand (APS-TS), which was used to calculate insulin dosage recommendations automatically. These recommendations were based on microdialysis CGM values and common diabetes therapy parameters. For the first half of their stay, the subjects directed their diabetes therapy themselves, whereas for the second half, the insulin recommendations were delivered by the APS-TS (so-called algorithm-driven therapy). During subject-directed therapy, the mean glucose was 114 mg/dl compared to 125 mg/dl during algorithm-driven therapy. Time in target (90 to 150 mg/dl) was approximately 46% during subject-directed therapy and approximately 58% during algorithm-driven therapy. When subjects directed their therapy, approximately 2 times more hypoglycemia interventions (oral administration of carbohydrates) were required than during algorithm-driven therapy. No hyperglycemia interventions (delivery of additional insulin) were necessary during subject-directed therapy, while during algorithm-driven therapy, 2 hyperglycemia interventions were necessary. The APS-TS was able to adequately control glucose concentrations in the subjects. Time in target was at least comparable or moderately higher during closed-loop control, and markedly fewer hypoglycemia interventions were required, thus increasing patient safety. PMID:25205589
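The time-in-target metric reported above is straightforward to compute from a CGM trace. The helper below is a hypothetical sketch using the study's 90 to 150 mg/dl band, not code from the APS-TS itself:

```python
def time_in_target(glucose_mg_dl, low=90.0, high=150.0):
    """Percentage of CGM readings inside the target band (hypothetical helper)."""
    readings = list(glucose_mg_dl)
    in_band = sum(low <= g <= high for g in readings)
    return 100.0 * in_band / len(readings)

# Three of five readings fall inside 90-150 mg/dl.
print(time_in_target([85, 100, 120, 160, 140]))  # 60.0
```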

  8. Development of a temperature-compensated hot-film anemometer system for boundary-layer transition detection on high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Chiles, H. R.; Johnson, J. B.

    1985-01-01

A hot-film constant-temperature anemometer (CTA) system was flight-tested and evaluated as a candidate sensor for determining boundary-layer transition on high-performance aircraft. The hot-film gage withstood an extreme flow environment characterized by shock waves and high dynamic pressures, although sensitivity to the local total temperature with the CTA indicated the need for some form of temperature compensation. A temperature-compensation scheme was developed, and two CTAs were modified and flight-tested on the F-104/Flight Test Fixture (FTF) facility at Mach numbers from 0.4 to 1.8 and altitudes from 5,000 to 40,000 ft.

  9. Developing collective customer knowledge and service climate: The interaction between service-oriented high-performance work systems and service leadership.

    PubMed

    Jiang, Kaifeng; Chuang, Chih-Hsun; Chiao, Yu-Ching

    2015-07-01

This study theorized and examined the influence of the interaction between service-oriented high-performance work systems (HPWSs) and service leadership on collective customer knowledge and service climate. Using a sample of 569 employees and 142 managers in footwear retail stores, we found that service-oriented HPWSs and service leadership reduced each other's influence on collective customer knowledge and service climate: the positive influence of service leadership on both outcomes was stronger when service-oriented HPWSs were lower than when they were higher, and the positive influence of service-oriented HPWSs was stronger when service leadership was lower than when it was higher. We further proposed and found that collective customer knowledge and service climate were positively related to objective financial outcomes through service performance. Implications for the literature and managerial practices are discussed.

  10. Determination of Sunset Yellow and Tartrazine in Food Samples by Combining Ionic Liquid-Based Aqueous Two-Phase System with High Performance Liquid Chromatography

    PubMed Central

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

We propose a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine (Ta) and sunset yellow (SY) in food samples. Under the optimized conditions, IL-ATPSs gave an extraction efficiency of 99% for both analytes, which could then be directly analyzed by HPLC without further treatment. Calibration plots were linear in the range of 0.01–50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. The method was successfully applied to the separation and analysis of tartrazine and sunset yellow in soft drink, candy, and instant powder drink samples, and gave results consistent with those obtained by the Chinese national standard method. PMID:25538857
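For readers unfamiliar with how such detection limits are derived, one common convention (ICH Q2: LOD ≈ 3.3·σ/slope of the calibration line) can be sketched with entirely hypothetical calibration data. The paper's 5.2 and 6.9 ng/mL figures come from its own measured calibration and noise, not from these numbers:

```python
import numpy as np

# Hypothetical calibration points: concentration (µg/mL) vs. peak area.
conc = np.array([0.01, 0.1, 1.0, 10.0, 50.0])
area = np.array([0.5, 4.8, 51.0, 498.0, 2510.0])

# Least-squares calibration line, then LOD from the residual noise.
slope, intercept = np.polyfit(conc, area, 1)
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)
lod = 3.3 * residual_sd / slope  # ICH Q2 convention, in µg/mL

print(round(slope, 1), round(lod, 3))
```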

  12. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  13. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  15. High performance collectors

    NASA Astrophysics Data System (ADS)

    Ogawa, H.; Hozumi, S.; Mitsumata, T.; Yoshino, K.; Aso, S.; Ebisu, K.

    1983-04-01

Materials and structures used for flat-plate solar collectors and evacuated tubular collectors were examined relative to their overall performance, to project their effectiveness for building heating and cooling and the feasibility of their use for generating industrial process heat. Thermal efficiencies were calculated for black-paint single-glazed, selective-surface single-glazed, and selective-surface double-glazed flat-plate collectors. The efficiencies of a single-tube collector and of a central tube accompanied by two side tubes were also studied. Techniques for extending the lifetimes of the collectors were defined. The selective-surface collectors proved to have performance superior to the other collectors in terms of average annual energy delivered. Addition of a black-chrome-coated fin system to the evacuated collectors produced significant increases in collection efficiency.

  16. Improving optical transmission and image contrast in medium and high performance optical systems using weighted average angle of incidence techniques to optimize coatings

    NASA Astrophysics Data System (ADS)

    Harder, James A.; Sprague, Michaelene

    2008-10-01

Designers of medium and high performance optical systems often overlook a very simple technique that can improve system transmission and image contrast, as well as reduce scattering within the system. The resulting improvement in optical collection efficiency can be used to increase performance or be traded off against improvements in other areas (i.e., aperture size, weight, etc.). The technique is based on the observation that many (if not most) anti-reflection coatings specified for lens surfaces are specified at a normal angle of incidence. Since most of the energy incident on a typical lens impinges at angles other than the normal, the efficiency of an anti-reflection coating at any surface might be improved by using an approach based on weighted average angles of the incident radiation. This paper describes one approach to calculating weighted average coating angles for an optical system. The optical transmissions are estimated when the respective coatings are specified at the normal angle of incidence and at angles based on the incident-ray geometry. The measured transmissions of two (otherwise identical) aspheric lenses, one coated using a standard SLAR coating specified at a normal incidence angle and the other coated using a standard SLAR coating specified at optimized incidence angles, are presented.
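At its core, the weighted-average angle-of-incidence idea reduces to an energy-weighted mean of the ray angles striking a surface. The sketch below is a hypothetical illustration of that averaging step, not the paper's actual procedure, and the angles and weights are made up:

```python
def weighted_average_aoi(angles_deg, weights):
    """Energy-weighted average angle of incidence for specifying an
    anti-reflection coating (hypothetical sketch). `weights` represent
    the relative radiant energy arriving at each angle."""
    total = sum(weights)
    return sum(a * w for a, w in zip(angles_deg, weights)) / total

# Most energy arrives off-normal, so the coating would be specified
# near 17 degrees rather than at normal incidence (0 degrees).
print(weighted_average_aoi([0, 10, 20, 30], [1, 3, 4, 2]))  # 17.0
```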

  17. Development of a high-performance, coal-fired power generating system with a pyrolysis gas and char-fired high-temperature furnace

    SciTech Connect

    Shenker, J.

    1995-11-01

A high-performance power system (HIPPS) is being developed. This system is a coal-fired, combined-cycle plant that will have an efficiency of at least 47 percent, based on the higher heating value of the fuel. The original emissions goal of the project was for NOx and SOx each to be below 0.15 lb/MMBtu; in the Phase 2 RFP this goal was reduced to 0.06 lb/MMBtu. The ultimate goal of HIPPS is an all-coal-fueled system, but initial versions of the system are allowed up to 35 percent heat input from natural gas. Foster Wheeler Development Corporation is currently leading a team effort with AlliedSignal, Bechtel, Foster Wheeler Energy Corporation, Research-Cottrell, TRW and Westinghouse. Previous work on the project was also done by General Electric. The HIPPS plant will use a High-Temperature Advanced Furnace (HITAF) to achieve combined-cycle operation with coal as the primary fuel. The HITAF is an atmospheric-pressure, pulverized-fuel-fired boiler/air heater, used to heat air for the gas turbine and also to transfer heat to the steam cycle. Its design and functions are very similar to those of conventional PC boilers; some important differences, however, arise from the requirements of combined-cycle operation.

  18. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

In this paper, parallelism methodologies for mapping rules derived from machine learning algorithms onto both software and hardware are investigated. Fed with patient disease data, these algorithms output medical diagnostic decision trees and their corresponding rules. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks operating concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measurements taken from patients can be fed both into the chips and into the parallel programs and recognized according to the classification rules incorporated in the chip and program designs. The chips and programs constitute medical decision support systems and can be incorporated into portable micro-devices, assisting physicians in their everyday diagnostic practice.

  19. Selective extraction and determination of vitamin B12 in urine by ionic liquid-based aqueous two-phase system prior to high-performance liquid chromatography.

    PubMed

    Berton, Paula; Monasterio, Romina P; Wuilloud, Rodolfo G

    2012-08-15

A rapid and simple extraction technique based on an aqueous two-phase system (ATPS) was developed for separation and enrichment of vitamin B12 in urine samples. The proposed ATPS-based method involves the application of the hydrophilic ionic liquid (IL) 1-hexyl-3-methylimidazolium chloride and K2HPO4. After the extraction procedure, the vitamin B12-enriched IL upper phase was directly injected into the high performance liquid chromatography (HPLC) system for analysis. All variables influencing the IL-based ATPS approach (e.g., the composition of the ATPS, pH and temperature) were evaluated. The average extraction efficiency was 97% under optimum conditions. Only 5.0 mL of sample and a single hydrolysis/deproteinization/extraction step were required, followed by direct injection of the IL-rich upper phase into the HPLC system for vitamin B12 determination. A detection limit of 0.09 μg/mL, a relative standard deviation (RSD) of 4.50% (n=10) and a linear range of 0.40-8.00 μg/mL were obtained. The proposed green analytical procedure was satisfactorily applied to the analysis of samples with highly complex matrices, such as urine. Finally, the IL-ATPS technique can be considered an efficient tool for extraction of the water-soluble vitamin B12.

  20. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology

    PubMed Central

    Foran, David J; Yang, Lin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

Objective and design: The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. Results: The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples. PMID:21606133

  1. Optimization and Assessment of Three Different High Performance Liquid Chromatographic Systems for the Combinative Fingerprint Analysis and Multi-Ingredients Quantification of Sangju Ganmao Tablet.

    PubMed

    Guo, Meng-Zhe; Han, Jie; He, Dan-Dan; Zou, Jia-Hui; Li, Zheng; Du, Yan; Tang, Dao-Quan

    2017-03-01

Chromatographic separation is still a critical subject for the quality control of traditional Chinese medicine. In this study, three different high performance liquid chromatographic (HPLC) systems, employing commercially available columns packed with 1.8, 3.5 and 5.0 μm particles, were developed and optimized for the combinative fingerprint analysis and multi-ingredient quantification of Sangju Ganmao tablet (SGT). Chromatographic parameters including the repeatability of retention time and peak area, symmetry factor, resolution, number of theoretical plates and peak capacity were used to assess the chromatographic performance of the different HPLC systems. The optimal chromatographic system, using an Agilent ZORBAX SB-C18 column (2.1 mm × 100 mm, 3.5 μm) as stationary phase, was coupled with a diode array detector or mass spectrometry detector for the chromatographic fingerprint analysis and simultaneous quantification or identification of nine compounds of SGT. All the validation data conformed to the acceptable requirements. For the fingerprint analysis, 31 peaks were selected as the common peaks to evaluate the similarities of SGT from 10 different manufacturers using heatmap, hierarchical cluster analysis and principal component analysis. The results demonstrated that the combination of quantitative analysis and chromatographic fingerprint analysis offers an efficient way to evaluate the quality consistency of SGT.

  2. Point and trend accuracy of a continuous intravenous microdialysis-based glucose-monitoring device in critically ill patients: a prospective study.

    PubMed

    Leopold, J H; van Hooijdonk, R T M; Boshuizen, M; Winters, T; Bos, L D; Abu-Hanna, A; Hoek, A M T; Fischer, J C; van Dongen-Lases, E C; Schultz, M J

    2016-12-01

    Microdialysis is a well-established technology that can be used for continuous blood glucose monitoring. We determined the point and trend accuracy, and reliability, of a microdialysis-based continuous blood glucose-monitoring device (EIRUS®) in critically ill patients. Prospective study involving patients with an expected intensive care unit stay of ≥48 h. Every 15 min, device readings were compared with blood glucose values measured in arterial blood during blocks of 8 h per day for a maximum of 3 days. The Clarke error grid, Bland-Altman plot, mean absolute relative difference and glucose prediction error analysis were used to express point accuracy, and the rate error grid to express trend accuracy. Reliability testing covered aspects of the device, the external sensor, and the special central venous catheter (CVC) with a semipermeable membrane for use with this device. We collected 594 paired values in 12 patients (65 [26-80; 8-97] (median [IQR; total range]) paired values per patient). Point accuracy: 93.6% of paired values were in zone A of the Clarke error grid and 6.4% in zone B; bias was 4.1 mg/dL, with an upper limit of agreement of 28.6 mg/dL and a lower limit of agreement of -20.5 mg/dL in the Bland-Altman analysis; 93.6% of the values ≥75 mg/dL were within 20% of the reference values in the glucose prediction error analysis; the mean absolute relative difference was 7.5%. Trend accuracy: 96.4% of the paired values were in zone A, and 3.3% and 0.3% were in zone B and zone C of the rate error grid. Reliability: out of 16 sensors, 4 had to be replaced prematurely; out of 12 CVCs, two malfunctioned (one after unintentional flushing, by unsupervised nurses, of the ports connected to the internal microdialysis chamber, causing rupture of the semipermeable membrane; one for an unknown reason). Device start-up time was 58 [56-67] min; availability of real-time data was 100% of the connection time. In this study in critically ill
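    The point-accuracy statistics used above (Bland-Altman bias with limits of agreement, and mean absolute relative difference) reduce to simple arithmetic on paired readings; the sketch below uses invented values, not the study's data.

```python
import numpy as np

def point_accuracy(device, reference):
    """Bland-Altman bias and 95% limits of agreement, plus the mean
    absolute relative difference (MARD, %), for paired glucose
    readings in mg/dL."""
    d = np.asarray(device, dtype=float)
    r = np.asarray(reference, dtype=float)
    diff = d - r
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    mard = float(np.mean(np.abs(diff) / r) * 100.0)
    return bias, limits, mard

# Hypothetical paired device/reference readings (mg/dL)
bias, (lo, hi), mard = point_accuracy([102, 95, 148, 110, 87],
                                      [100, 98, 140, 108, 90])
```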

  3. Transforming Regions into High-Performing Health Systems Toward the Triple Aim of Better Health, Better Care and Better Value for Canadians.

    PubMed

    Bergevin, Yves; Habib, Bettina; Elicksen-Jensen, Keesa; Samis, Stephen; Rochon, Jean; Denis, Jean-Louis; Roy, Denis

    2016-01-01

    A study on the impact of regionalization on the Triple Aim of Better Health, Better Care and Better Value across Canada in 2015 identified major findings, including: (a) with regard to the Triple Aim, the Canadian situation is better than before but variable and partial, and Canada continues to underperform compared with other industrialized countries, especially in primary healthcare, where it matters most; (b) provinces are converging toward a two-level health system (provincial/regional); (c) the optimal size of regions is probably around 350,000-500,000 population; (d) citizen and physician engagement remains weak. A realistic and attainable vision for high-performing regional health systems is presented together with a way forward, including seven areas for improvement: 1. Manage the integrated regionalized health systems as results-driven health programs; 2. Strengthen wellness promotion, public health and intersectoral action for health; 3. Ensure timely access to personalized primary healthcare/family health and to proximity services; 4. Involve physicians in clinical governance and leadership, and partner with them in accountability for results including the required changes in physician remuneration; 5. Engage citizens in shaping their own health destiny and their health system; 6. Strengthen health information systems, accelerate the deployment of electronic health records and ensure their interoperability with health information systems; 7. Foster a culture of excellence and continuous quality improvement. We propose a turning point for Canada, from Paradigm Freeze to Paradigm Shift: from hospital-centric episodic care toward evidence-informed population-based primary and community care with modern family health teams, ensuring integrated and coordinated care along the continuum, especially for high users.
We suggest goals and targets for 2020 and time-bound federal/provincial/regional working groups toward reaching the identified goals and targets and placing

  4. FPGA Based High Performance Computing

    SciTech Connect

    Bennett, Dave; Mason, Jeff; Sundararajan, Prasanna; Dellinger, Erik; Putnam, Andrew; Storaasli, Olaf O

    2008-01-01

    Current high performance computing (HPC) applications are found in many consumer, industrial and research fields. From web searches to auto crash simulations to weather predictions, the compute farms and supercomputers required to run these applications consume large amounts of power. The demand for more and faster computation continues to increase, along with an even sharper increase in the cost of the power required to operate and cool these installations. The ability of standard processor-based systems to address these needs has declined, in both speed of computation and power consumption, over the past few years. This paper presents a new method of computation based upon programmable logic, as represented by Field Programmable Gate Arrays (FPGAs), that addresses these needs in a manner requiring only minimal changes to the current software design environment.

  5. Developing high-performance leaders.

    PubMed

    Melum, Mara

    2002-01-01

    Although there is widespread recognition that strong leadership is key in these challenging times, many companies provide only the tip of the iceberg of leadership development support. This article is a resource for building high-powered leadership development systems that have an impact on performance. Four topics are discussed: (1) models, (2) investment and results, (3) critical success factors, and (4) case studies of how the 3M Company and HealthPartners develop high-performance leaders. Studies that quantify the effect of leadership development on performance are noted. Five critical success factors are described, and examples from leadership development benchmark organizations such as General Electric and Reell Precision Manufacturing are discussed.

  6. High Performance Flexible Thermal Link

    NASA Astrophysics Data System (ADS)

    Sauer, Arne; Preller, Fabian

    2014-06-01

    The paper deals with the design and performance verification of a high performance, flexible carbon fibre thermal link. The project goal was to design a space-qualified thermal link combining low mass, flexibility and high thermal conductivity with new approaches regarding selected materials and processes. The idea was to combine the advantages of existing metallic links regarding flexibility with the thermal performance of high-conductivity carbon pitch fibres. Special focus is placed on improving the thermal performance of matrix systems by means of nano-scaled carbon materials, so as to improve the thermal performance perpendicular to the direction of the unidirectional fibres as well. One of the main challenges was to establish a manufacturing process which allows handling the stiff and brittle fibres, applying the matrix and performing the implementation into an interface component using unconventional process steps such as thermal bonding of fibres after metallisation. This research was funded by the German Federal Ministry for Economic Affairs and Energy (BMWi).

  7. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions for Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  8. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  9. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis.

    PubMed

    Lee, Ki-Sun; Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan; Lee, Jeong-Yol

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core material. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems.

  10. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the face of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment with clear applications in several domains, including telemedicine and homecare.
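    A much-simplified version of the signal path (the study itself used the smoothed pseudo-Wigner-Ville distribution) is to take the dominant spectral peak of the mean pixel intensity trace within the physiological band; the sketch below does this on a synthetic 72 bpm signal rather than real camera data.

```python
import numpy as np

def pulse_rate_bpm(signal, fs):
    """Estimate pulse rate from a remote-PPG intensity trace by
    picking the strongest FFT peak in the 0.7-4.0 Hz band
    (42-240 bpm). A simpler stand-in for time-frequency methods."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC level
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return float(freqs[band][np.argmax(power[band])] * 60.0)

# Synthetic trace: 1.2 Hz (72 bpm) pulse sampled at 30 fps for 20 s
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
trace = 1.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t)
rate = pulse_rate_bpm(trace, fs)
```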

  11. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis

    PubMed Central

    Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core material. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems. PMID:28386547

  12. Determination of histamine in wines with an on-line pre-column flow derivatization system coupled to high performance liquid chromatography.

    PubMed

    García-Villar, Natividad; Saurina, Javier; Hernández-Cassou, Santiago

    2005-09-01

    A new rapid and sensitive high performance liquid chromatography (HPLC) method for determining histamine in red wine samples, based on continuous flow derivatization with 1,2-naphthoquinone-4-sulfonate (NQS), is proposed. In this system, samples are derivatized on-line in a three-channel flow manifold for reagent, buffer and sample. The reaction takes place in a PTFE coil heated at 80 degrees C and with a residence time of 2.9 min. The reaction mixture is injected directly into the chromatographic system, where the histamine derivative is separated from other aminated compounds present in the wine matrix in less than ten minutes. The HPLC procedure involves a C18 column, a binary gradient of 2% acetic acid-methanol as a mobile phase, and UV detection at 305 nm. Analytical parameters of the method are evaluated using red wine samples. The linear range is up to 66.7 mg/L (r = 0.9999), the precision (RSD) is 3%, the detection limit is 0.22 mg/L, and the average histamine recovery is 101.5 ± 6.7%. Commercial red wines from different Spanish regions are analyzed with the proposed method.
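    The recovery figure reported above follows from standard spike-recovery arithmetic, and the calibration line from an ordinary least-squares fit; the sketch below uses invented concentrations and responses, not the paper's data.

```python
import numpy as np

def spike_recovery(measured_spiked, measured_unspiked, spike_amount):
    """Percent recovery of an analyte spike: how much of the added
    amount the method finds back."""
    return (measured_spiked - measured_unspiked) / spike_amount * 100.0

def calibration_line(conc, response):
    """Least-squares slope and intercept of a linear calibration."""
    slope, intercept = np.polyfit(conc, response, 1)
    return float(slope), float(intercept)

# Hypothetical histamine spike: 10.0 mg/L added to a wine sample
recovery = spike_recovery(12.3, 2.4, 10.0)           # in percent
slope, intercept = calibration_line([0.0, 10.0, 20.0],
                                    [0.1, 20.1, 40.1])
```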

  13. High-performance low-noise 128-channel readout-integrated circuit for flat-panel x-ray detector systems

    NASA Astrophysics Data System (ADS)

    Beuville, Eric J.; Belding, Mark; Costello, Adrienne N.; Hansen, Randy; Petronio, Susan M.

    2004-05-01

    A silicon mixed-signal integrated circuit is needed to extract and process x-ray induced signals from a coated flat panel thin film transistor array (TFT) in order to generate a digital x-ray image. Indigo Systems Corporation has designed, fabricated, and tested such a readout integrated circuit (ROIC), the ISC9717. This off-the-shelf, high performance, low-noise, 128-channel device is fully programmable with a multistage pipelined architecture and a 9 to 14-bit programmable A/D converter per channel, making it suitable for numerous X-ray medical imaging applications. These include high-resolution radiography in single frame mode and fluoroscopy where high frame rates are required. The ISC9717 can be used with various flat panel arrays and solid-state detector materials: Selenium (Se), Cesium Iodide (CsI), Silicon (Si), Amorphous Silicon, Gallium Arsenide (GaAs), and Cadmium Zinc Telluride (CdZnTe). The 80-micron pitch ROIC is designed to interface (wire bonding or flip-chip) along one or two sides of the x-ray panel, where ROICs are abutted vertically, each reading out charge from pixels multiplexed onto 128 horizontal read lines. The paper will present the design and test results of the ROIC, including the mechanical and electrical interface to a TFT array, system performance requirements, output multiplexing of the digital signals to an off-board processor, and characterization test results from fabricated arrays.

  14. High-performance size exclusion chromatography with a multi-wavelength absorbance detector study on dissolved organic matter characterisation along a water distribution system.

    PubMed

    Huang, Huiping; Sawade, Emma; Cook, David; Chow, Christopher W K; Drikas, Mary; Jin, Bo

    2016-06-01

    This study examined the associations between dissolved organic matter (DOM) characteristics and potential nitrification occurrence in the presence of chloramine along a drinking water distribution system. High-performance size exclusion chromatography (HPSEC) coupled with a multiple-wavelength detector (200-280 nm) was employed to characterise DOM by molecular weight distribution, and bacterial activity was analysed using flow cytometry. A package of simple analytical tools, such as dissolved organic carbon, absorbance at 254 nm, nitrate, nitrite, ammonia and total disinfectant residual, was also applied, and their applicability as indicators of water quality changes in distribution systems was evaluated. Results showed that multi-wavelength HPSEC analysis was useful for providing information about DOM character, while changes in molecular weight profiles at wavelengths below 230 nm could also be related to other water quality parameters. Correct selection of the UV wavelengths can be an important factor in providing appropriate indicators associated with different DOM compositions. DOM molecular weight in the range of 0.2-0.5 kDa measured at 210 nm correlated positively with oxidised nitrogen concentration (r=0.99) and with the concentration of active bacterial cells in the distribution system (r=0.85). Our study also showed that the changes in DOM character and bacterial cells were significant at those sampling points that had decreases in total disinfectant residual. HPSEC-UV measured at 210 nm and flow cytometry can detect changes in low molecular weight DOM and bacterial levels, respectively, when nitrification occurs within the chloraminated distribution system. Copyright © 2016. Published by Elsevier B.V.

  15. Monoclonal antibody heterogeneity analysis and deamidation monitoring with high-performance cation-exchange chromatofocusing using simple, two component buffer systems.

    PubMed

    Kang, Xuezhen; Kutzko, Joseph P; Hayes, Michael L; Frey, Douglas D

    2013-03-29

    The use of either a polyampholyte buffer or a simple buffer system for the high-performance cation-exchange chromatofocusing of monoclonal antibodies is demonstrated for the case where the pH gradient is produced entirely inside the column and with no external mixing of buffers. The simple buffer system used was composed of two buffering species, one which becomes adsorbed onto the column packing and one which does not adsorb, together with an adsorbed ion that does not participate in acid-base equilibrium. The method which employs the simple buffer system is capable of producing a gradual pH gradient in the neutral to acidic pH range that can be adjusted by proper selection of the starting and ending pH values for the gradient as well as the buffering species concentration, pKa, and molecular size. By using this approach, variants of representative monoclonal antibodies with isoelectric points of 7.0 or less were separated with high resolution so that the approach can serve as a complementary alternative to isoelectric focusing for characterizing a monoclonal antibody based on differences in the isoelectric points of the variants present. Because the simple buffer system used eliminates the use of polyampholytes, the method is suitable for antibody heterogeneity analysis coupled with mass spectrometry. The method can also be used at the preparative scale to collect highly purified isoelectric variants of an antibody for further study. To illustrate this, a single isoelectric point variant of a monoclonal antibody was collected and used for a stability study under forced deamidation conditions. Copyright © 2013 Elsevier B.V. All rights reserved.
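    The local pH produced by such buffering species is governed by the Henderson-Hasselbalch relation, pH = pKa + log10([base]/[acid]); the sketch below evaluates it for illustrative values (the pKa and concentrations are made up, not taken from the study).

```python
import math

def buffer_ph(pka, base_conc, acid_conc):
    """Henderson-Hasselbalch estimate of the pH set by a buffering
    species; equal conjugate-base and acid concentrations give
    pH = pKa."""
    return pka + math.log10(base_conc / acid_conc)

# Illustrative buffering species with pKa 6.0
ph_equal = buffer_ph(6.0, 1.0, 1.0)    # base:acid = 1:1
ph_basic = buffer_ph(6.0, 10.0, 1.0)   # base:acid = 10:1
```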

  16. High-performance intraoperative cone-beam CT on a mobile C-arm: an integrated system for guidance of head and neck surgery

    NASA Astrophysics Data System (ADS)

    Siewerdsen, J. H.; Daly, M. J.; Chan, H.; Nithiananthan, S.; Hamming, N.; Brock, K. K.; Irish, J. C.

    2009-02-01

    A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video. Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such systems in complex OR environments.

  17. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  19. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high-temperature furnace (HITAF): Volume 4. Final report

    SciTech Connect

    1996-05-01

    An outgrowth of our studies of the FWDC coal-fired high performance power systems (HIPPS) concept was the development of a concept for the repowering of existing boilers. The initial analysis of this concept indicates that it will be both technically and economically viable. A unique feature of our greenfields HIPPS concept is that it integrates the operation of a pressurized pyrolyzer and a pulverized fuel-fired boiler/air heater. Once this type of operation is achieved, there are a few different applications of this core technology. Two greenfields plant options are the base case plant and a plant where ceramic air heaters are used to extend the limit of air heating in the HITAF. The greenfields designs can be used for repowering in the conventional sense, which involves replacing almost everything in the plant except the steam turbine and accessories. Another option is to keep the existing boiler and add a pyrolyzer and gas turbine to the plant. The study was done on an Eastern utility plant. The owner is currently considering replacing two units with atmospheric fluidized bed boilers, but is interested in a comparison with HIPPS technology. After repowering, the emissions levels need to be 0.25 lb SOx/MMBtu and 0.15 lb NOx/MMBtu.

  20. Engineering development of coal-fired high performance power systems, Phases 2 and 3. Quarterly progress report, October 1--December 31, 1996. Final report

    SciTech Connect

    1996-12-31

    The goals of this program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% efficiency (HHV); NOx, SOx, and particulate emissions below 10% of NSPS levels; coal providing ≥65% of heat input; all solid wastes benign; and a cost of electricity 90% of that of present plants. Work reported herein is from Task 1.3 HIPPS Commercial Plant Design, Task 2.2 HITAF Air Heater, and Task 2.4 Duct Heater Design. The impact on cycle efficiency of the integration of various technology advances is presented. The criteria associated with a commercial HIPPS plant design, as well as possible environmental control options, are presented. The design of the HITAF air heaters, both radiative and convective, is the most critical task in the program. In this report, a summary of the effort associated with the radiative air heater designs that have been considered is provided. The primary testing of the air heater design will be carried out in the UND/EERC pilot-scale furnace; progress to date on the design and construction of the furnace is a major part of this report. The results of laboratory and bench-scale activities associated with defining slag properties are presented. Correct material selection is critical for the success of the concept; the materials, both ceramic and metallic, being considered for the radiant air heater are presented.