Science.gov

Sample records for high-performance microdialysis-based system

  1. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, April 18 through 21, 1994.

  2. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, and diagnosing performance failures.
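
    The record does not reproduce the monograph's design methods, but the usual rational-design starting point for CBOD removal in aerated lagoons is a first-order, complete-mix, cells-in-series model. The sketch below illustrates that textbook calculation; the rate constant, temperature coefficient, and operating values are assumed examples, not figures from the monograph.

      # First-order CBOD removal in n equal complete-mix lagoon cells in
      # series -- an illustrative textbook model, not the monograph's own
      # procedure. All parameter values are assumed examples.

      def effluent_cbod(s0_mg_l, k20_per_d, hrt_d, n_cells, temp_c, theta=1.06):
          """Effluent CBOD (mg/L) for n equal complete-mix cells in series."""
          k = k20_per_d * theta ** (temp_c - 20.0)   # temperature-corrected rate
          return s0_mg_l / (1.0 + k * hrt_d / n_cells) ** n_cells

      # 200 mg/L influent, k20 = 0.7 1/d, 10-day total HRT, 3 cells, 15 degC
      print(round(effluent_cbod(200.0, 0.7, 10.0, 3, 15.0), 1), "mg/L")  # ~9.7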

  3. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  4. High performance solar Stirling system

    NASA Technical Reports Server (NTRS)

    Stearns, J. W.; Haglund, R.

    1981-01-01

    A full-scale Dish-Stirling system experiment, at a power level of 25 kWe, was tested during 1981 on Test Bed Concentrator No. 2 at the Parabolic Dish Test Site, Edwards, CA. Test components, designed and developed primarily by industrial contractors for the Department of Energy, include an advanced Stirling engine driving an induction alternator, a directly coupled solar receiver with a natural gas combustor for hybrid operation, and a breadboard control system based on a programmable controller and standard utility substation components. The experiment demonstrated the practicality of the solar Stirling application and high system performance feeding into a utility grid. This paper describes the design and its functions, and the test results obtained.

  5. High Performance Work Systems and Firm Performance.

    ERIC Educational Resources Information Center

    Kling, Jeffrey

    1995-01-01

    A review of 17 studies of high-performance work systems concludes that benefits of employee involvement, skill training, and other high-performance work practices tend to be greater when new methods are adopted as part of a consistent whole. (Author)

  6. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K. (Bethune-Cookman College; SLAC)

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible system failures. Ganglia is a software system designed to retrieve specific monitoring information from high performance computing systems. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
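
    The abstract does not include the script itself; the sketch below shows one plausible shape for such a collector, polling the XML feed that the Ganglia gmond daemon serves (TCP port 8649 by default) and inserting each metric as a row. The table layout is an assumption, and sqlite3 stands in here for the paper's MySQL back end so the example is self-contained.

      # Hypothetical sketch of a script-driven SQL store for Ganglia
      # metrics; not the SLAC implementation. gmond serves a full XML
      # dump of cluster state on its TCP port (8649 by default).
      import socket
      import sqlite3                      # stand-in for the MySQL back end
      import xml.etree.ElementTree as ET

      def fetch_gmond_xml(host="localhost", port=8649):
          """Read the XML cluster dump that gmond serves on its TCP port."""
          chunks = []
          with socket.create_connection((host, port)) as sock:
              while True:
                  data = sock.recv(65536)
                  if not data:
                      break
                  chunks.append(data)
          return b"".join(chunks)

      db = sqlite3.connect("ganglia_metrics.db")
      db.execute("CREATE TABLE IF NOT EXISTS metrics"
                 " (host TEXT, name TEXT, value TEXT, units TEXT)")
      root = ET.fromstring(fetch_gmond_xml())
      for host in root.iter("HOST"):          # one element per cluster node
          for metric in host.iter("METRIC"):  # NAME/VAL/UNITS attributes
              db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                         (host.get("NAME"), metric.get("NAME"),
                          metric.get("VAL"), metric.get("UNITS")))
      db.commit()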

  7. Automated microdialysis-based system for in situ microsampling and investigation of lead bioavailability in terrestrial environments under physiologically based extraction conditions.

    PubMed

    Rosende, María; Magalhães, Luis M; Segundo, Marcela A; Miró, Manuel

    2013-10-15

    In situ automatic microdialysis sampling under batch-flow conditions is herein proposed for the first time for expedient assessment of the kinetics of lead bioaccessibility/bioavailability in contaminated and agricultural soils, exploiting the harmonized physiologically based extraction test (UBM). Built around a concentric microdialysis probe immersed in synthetic gut fluids, the miniaturized flow system is harnessed for continuous monitoring of lead transfer across the permselective microdialysis membrane to mimic the diffusive transport of metal species through the epithelium of the stomach and of the small intestine. In addition, the introduction of the UBM gastrointestinal fluid surrogates at specified time frames is fully mechanized. Distinct microdialysis probe configurations and membrane types were investigated in detail to ensure passive sampling under steady-state dialytic conditions for lead. Using a 3-cm-long polysulfone membrane with an average molecular weight cutoff of 30 kDa in a concentric probe and a perfusate flow rate of 2.0 μL min⁻¹, microdialysis relative recoveries in the gastric phase were close to 100%, thereby eliminating the need for probe calibration. The automatic leaching method was validated in terms of bias in the analysis of four soils with different physicochemical properties and a wide range of lead content (16 ± 3 to 1216 ± 42 mg kg⁻¹), using mass balance assessment as a quality control tool. No significant differences between the mass balance and the total lead concentration in the suite of analyzed soils were encountered (α = 0.05). Our finding that extraction of soil-borne lead for merely one hour in the GI phase suffices for assessment of the bioavailable fraction, as a result of the fast immobilization of lead species at near-neutral conditions, would assist in providing risk assessment data from the UBM test on short notice. PMID:24016003
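
    Two quantities carry the argument here: the microdialysis relative recovery (dialysate concentration relative to the bulk sample, which at ~100% lets probe calibration be skipped) and the mass-balance quality check (extracted plus residual lead should match the total). A small sketch of that arithmetic, with made-up example numbers rather than the paper's data:

      # Illustrative arithmetic only; the concentrations below are
      # invented examples, not values from the study.

      def relative_recovery_pct(dialysate_conc, sample_conc):
          """Relative recovery (%); near 100% means calibration-free sampling."""
          return 100.0 * dialysate_conc / sample_conc

      def mass_balance_ok(bioaccessible, residual, total, tol=0.10):
          """QC: extracted + residual lead should match the total within tol."""
          return abs((bioaccessible + residual) - total) / total <= tol

      print(relative_recovery_pct(9.8, 10.0))       # 98.0 -> near-quantitative
      print(mass_balance_ok(410.0, 790.0, 1216.0))  # True (within 10%)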

  8. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  9. Management issues for high performance storage systems

    SciTech Connect

    Louis, S.; Burris, R.

    1995-03-01

    Managing distributed high-performance storage systems is complex and, although sharing common ground with traditional network and systems management, presents unique storage-related issues. Integration technologies and frameworks exist to help manage distributed network and system environments. Industry-driven consortia provide open forums where vendors and users cooperate to leverage solutions. But these new approaches to open management fall short of addressing the needs of scalable, distributed storage. We discuss the motivation and requirements for storage system management (SSM) capabilities and describe how SSM manages distributed servers and storage resource objects in the High Performance Storage System (HPSS), a new storage facility for data-intensive applications and large-scale computing. Modern storage systems, such as HPSS, require many SSM capabilities, including server and resource configuration control, performance monitoring, quality of service, flexible policies, file migration, file repacking, accounting, and quotas. We present results of initial HPSS SSM development, including design decisions and implementation trade-offs. We conclude with plans for follow-on work and provide storage-related recommendations for vendors and standards groups seeking enterprise-wide management solutions.

  10. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e., windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and long service life, which are required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys therefore perform poorly as barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, no other cost-effective and energy-efficient replacement material is available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and, in turn, reduce the energy consumption of commercial buildings and achieve a zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial
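
    The record is truncated, but its technical point is the conduction penalty: frame heat flow scales with the frame U-factor as Q = U x A x dT, so a highly conductive aluminum frame loses roughly twice the heat of a thermally improved one of the same area. The U-factors below are assumed, order-of-magnitude illustration values, not results from this project.

      # Rough steady-state frame heat loss, Q = U * A * dT. The U-factors
      # are assumed illustration values, not project measurements.
      def frame_heat_loss_w(u_w_m2k, area_m2, delta_t_k):
          return u_w_m2k * area_m2 * delta_t_k

      FRAME_AREA = 0.8   # m^2 of frame per curtain-wall unit (assumed)
      DELTA_T = 20.0     # indoor/outdoor temperature difference, K

      plain = frame_heat_loss_w(6.0, FRAME_AREA, DELTA_T)    # no thermal break
      improved = frame_heat_loss_w(3.0, FRAME_AREA, DELTA_T) # thermally broken
      print(plain, "W vs", improved, "W")                    # 96.0 W vs 48.0 W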

  11. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three-year PIER-funded R&D program, "High Performance Commercial Building Systems" (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply ingrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  12. Advanced solidification system using high performance cement

    SciTech Connect

    Kikuchi, Makoto; Matsuda, Masami; Nishi, Takashi; Tsuchiya, Hiroyuki; Izumida, Tatsuo

    1995-12-31

    Advanced cement solidification is proposed for the solidification of radioactive waste such as spent ion exchange resin, incineration ash, and liquid waste. A new, high performance cement has been developed to raise volume reduction efficiency and lower radioactivity release into the environment. It consists of slag cement, reinforcing fiber, natural zeolite, and lithium nitrate (LiNO3). The fiber allows waste loading to be increased from 20 to 55 kg dry resin per 200 L. The zeolite, whose main constituent is clinoptilolite, reduces cesium leachability from the waste form to about one-tenth. Lithium nitrate prevents alkaline corrosion of the aluminum contained in ash and reduces hydrogen gas generation. Laboratory and full-scale pilot plant experiments were performed to evaluate properties of the waste form using simulated wastes. Emphasis was placed on improved solidification of spent resin and ash.

  13. A systems approach to high performance oscillators

    NASA Technical Reports Server (NTRS)

    Stein, S. R.; Manney, C. M., Jr.; Walls, F. L.; Gray, J. E.; Besson, R. J.

    1978-01-01

    The purpose of this paper is to show how systems composed of multiple oscillators and resonators can achieve superior performance compared to a single oscillator. Experimental results are presented for two systems based on quartz crystals which provide state-of-the-art stability over a much wider range of averaging times than has been previously achieved. One system has achieved a factor of five improvement in noise floor compared to all previously reported results.
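
    Oscillator "stability over a range of averaging times" is conventionally quantified with the Allan deviation. The sketch below computes the non-overlapping Allan deviation from fractional-frequency samples; it is a generic illustration of that metric with synthetic white-FM noise, not the paper's data or analysis code.

      # Non-overlapping Allan deviation: sigma_y^2(tau) = 0.5 * <(ybar_{k+1}
      # - ybar_k)^2>, with ybar averaged over blocks of m samples. Generic
      # illustration with synthetic noise, not the paper's measurements.
      import numpy as np

      def allan_deviation(y, m):
          """Allan deviation at averaging time m * tau0 (tau0 = sample spacing)."""
          n = len(y) // m
          ybar = y[: n * m].reshape(n, m).mean(axis=1)
          return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

      rng = np.random.default_rng(0)
      y = 1e-12 * rng.standard_normal(100_000)   # toy white-FM frequency noise
      for m in (1, 10, 100, 1000):
          print(m, allan_deviation(y, m))        # falls as 1/sqrt(tau) for white FM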

  14. High performance VLSI telemetry data systems

    NASA Technical Reports Server (NTRS)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  15. High-performance computing and distributed systems

    SciTech Connect

    Loken, S.C.; Greiman, W.; Jacobson, V.L.; Johnston, W.E.; Robertson, D.W.; Tierney, B.L.

    1992-09-01

    We present a scenario for a fully distributed computing environment in which computing, storage, and I/O elements are configured on demand into "virtual systems" that are optimal for the solution of a particular problem. We also describe two pilot projects that illustrate some of the elements and issues of this scenario. The goal of this work is to make the most powerful computing systems those that are logically assembled from network based components, and to make those systems available independent of the geographic location of the constituent elements.

  16. High-Performance Energy Applications and Systems

    SciTech Connect

    Miller, Barton

    2014-05-19

    The Paradyn project has a history of developing algorithms, techniques, and software that push the cutting edge of tool technology for high-end computing systems. Under this funding, we are working on a three-year agenda to make substantial new advances in support of new and emerging Petascale systems. The overall goal for this work is to address the steady increase in complexity of these petascale systems. Our work covers two key areas: (1) The analysis, instrumentation and control of binary programs. Work in this area falls under the general framework of the Dyninst API tool kits. (2) Infrastructure for building tools and applications at extreme scale. Work in this area falls under the general framework of the MRNet scalability framework. Note that work done under this funding is closely related to work done under a contemporaneous grant, “Foundational Tools for Petascale Computing”, SC0003922/FG02-10ER25940, UW PRJ27NU.

  17. The high performance storage system (HPSS)

    SciTech Connect

    Kliewer, K.L.

    1995-12-31

    Ever more powerful computers and rapidly enlarging data sets require unprecedented levels of data storage and access capabilities. To help meet these requirements, the scalable, network-centered, parallel storage system HPSS was designed and is now being developed. The parallel I/O architecture, mechanisms, strategies, and capabilities are described. The current development status and the broad applicability are illustrated through a discussion of the sites at which HPSS is now being implemented, representing a spectrum of computing environments. Planned capabilities and time scales will be provided. Some of the remarkable developments in storage media data density looming on the horizon will also be noted.

  18. Technologies of high-performance thermography systems

    NASA Astrophysics Data System (ADS)

    Breiter, R.; Cabanski, Wolfgang A.; Mauk, K. H.; Kock, R.; Rode, W.

    1997-08-01

    A family of two-dimensional detection modules based on 256 by 256 and 486 by 640 platinum silicide (PtSi) focal planes, or 128 by 128 and 256 by 256 mercury cadmium telluride (MCT) focal planes, for applications in either the 3-5 micrometer (MWIR) or 8-10 micrometer (LWIR) range was recently developed by AIM. A wide variety of applications is covered by the specific features unique to these two material systems. The PtSi units provide state of the art correctability with long-term stable gain and offset coefficients. The MCT units provide very fast frame rates, such as 400 Hz, with snapshot integration times as short as 250 microseconds and a thermal resolution NETD of less than 20 mK for, e.g., the 128 by 128 LWIR module. The design idea common to all of these modules is the exclusively digital interface, using 14 bit analog to digital conversion to provide state of the art correctability, access to highly dynamic scenes without any loss of information, and simplified exchangeability of the units. Device specific features such as bias voltages are identified during the final test and stored in a memory on the driving electronics. This concept allows easy exchange of IDCAs of the same type without any need for tuning, or, for example, upgrading a PtSi based unit to an MCT module by just loading the suitable software. Miniaturized digital signal processor (DSP) based image correction units were developed for testing and operating the units with output data rates of up to 16 Mpixels/s. These boards provide freely programmable realtime functions such as two point correction and various data manipulations for thermography applications.
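
    The "two point correction" these DSP boards perform is the standard gain/offset non-uniformity correction: per-pixel coefficients are derived from two frames of a uniform blackbody at known temperatures. The NumPy sketch below shows the generic technique under that assumption; it is not AIM's firmware, and the counts are synthetic.

      # Generic two-point non-uniformity correction: per-pixel gain and
      # offset from two uniform blackbody reference frames. Synthetic
      # data; a sketch of the named technique, not AIM's implementation.
      import numpy as np

      def two_point_nuc(raw, cold_ref, hot_ref, t_cold, t_hot):
          """Map raw counts to a flat, temperature-scaled image."""
          gain = (t_hot - t_cold) / (hot_ref - cold_ref)   # per-pixel gain
          offset = t_cold - gain * cold_ref                # per-pixel offset
          return gain * raw + offset

      rng = np.random.default_rng(1)
      shape = (128, 128)                            # e.g. the 128x128 LWIR module
      cold = 1000.0 + 50.0 * rng.standard_normal(shape)   # counts at 20 degC
      hot = 3000.0 + 50.0 * rng.standard_normal(shape)    # counts at 40 degC
      scene = 0.5 * (cold + hot)                          # mid-scale test scene
      print(two_point_nuc(scene, cold, hot, 20.0, 40.0).mean())   # ~30.0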

  19. Building Synergy: The Power of High Performance Work Systems.

    ERIC Educational Resources Information Center

    Gephart, Martha A.; Van Buren, Mark E.

    1996-01-01

    Suggests that high-performance work systems create the synergy that lets companies gain and keep a competitive advantage. Identifies the components of high-performance work systems and critical action steps for implementation. Describes the results companies such as Xerox, Lever Brothers, and Corning Incorporated have achieved by using them. (JOW)

  1. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used in: (1) developing a parallel input/output system specifically for this application; (2) extracting the important input/output characteristics of data assimilation problems; and (3) building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  2. Teacher and Leader Effectiveness in High-Performing Education Systems

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda, Ed.; Rothman, Robert, Ed.

    2011-01-01

    The issue of teacher effectiveness has risen rapidly to the top of the education policy agenda, and the federal government and states are considering bold steps to improve teacher and leader effectiveness. One place to look for ideas is the experiences of high-performing education systems around the world. Finland, Ontario, and Singapore all have…

  3. Class of service in the high performance storage system

    SciTech Connect

    Louis, S.; Teaff, D.

    1995-01-10

    Quality of service capabilities are commonly deployed in archival mass storage systems as one or more client-specified parameters that influence the physical location of data in multi-level device hierarchies for performance or cost reasons. The capabilities of new high-performance storage architectures and the needs of data-intensive applications require better quality of service models for modern storage systems. HPSS, a new distributed, high-performance, scalable storage system, uses a Class of Service (COS) structure to influence system behavior. The authors summarize the design objectives and functionality of HPSS and describe how COS defines a set of performance, media, and residency attributes assigned to storage objects managed by HPSS servers. COS definitions are used to provide appropriate behavior and service levels as requested (or demanded) by storage system clients. They compare the HPSS COS approach with other quality of service concepts and discuss alignment possibilities.
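
    Concretely, a Class of Service can be pictured as a named bundle of performance, media, and residency attributes that a client names when creating a storage object. The sketch below is only an illustration of that idea; the field names and the selection rule are assumptions, not the HPSS COS schema or API.

      # Illustrative shape of a Class of Service record; field names are
      # assumptions for this sketch, not HPSS's actual COS definition.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class ClassOfService:
          cos_id: int
          description: str
          transfer_rate_mb_s: float   # performance attribute
          media_type: str             # media attribute: "disk" or "tape"
          copies: int                 # residency: number of copies kept
          stage_on_open: bool         # residency: recall to disk on access

      ARCHIVE = ClassOfService(2, "dual-copy tape archive", 5.0, "tape", 2, False)
      SCRATCH = ClassOfService(7, "fast scratch disk", 200.0, "disk", 1, True)

      def pick_cos(catalog, need_mb_s):
          """Pick the slowest class that still meets the client's rate demand."""
          ok = [c for c in catalog if c.transfer_rate_mb_s >= need_mb_s]
          return min(ok, key=lambda c: c.transfer_rate_mb_s) if ok else None

      print(pick_cos([ARCHIVE, SCRATCH], 50.0).description)   # fast scratch disk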

  4. Los Alamos National Laboratory's high-performance data system

    SciTech Connect

    Mercier, C.; Chorn, G.; Christman, R.; Collins, B.

    1991-01-01

    Los Alamos National Laboratory is designing a High-Performance Data System (HPDS) that will provide storage for supercomputers requiring large files and fast transfer speeds. The HPDS will meet the performance requirements by managing data transfers from high-speed storage systems connected directly to a high-speed network. File and storage management software will be distributed in workstations. Network protocols will ensure reliable, wide-area network data delivery to support long-distance distributed processing. 3 refs., 2 figs.

  5. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  6. Scyld Beowulf: A Standard, High-Performance Cluster Operating System

    NASA Astrophysics Data System (ADS)

    Becker, Donald

    2001-06-01

    Beowulf systems are high performance computers constructed from commodity hardware connected by a private internal network. Scyld Beowulf is a new generation Beowulf cluster operating system that presents this collection of machines as a single system. New features such as a unified process space, node scheduler, and integrated libraries reduce the complexity of building and using cluster applications. This talk will describe how the Scyld Beowulf system works, how we use it to simplify installation, administration and running applications, and the architectural model and interface it provides to application developers and end users.

  7. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Even with standard high-performance peripherals and storage devices, the I/O function can still be a bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. The optical disks are used as archive

  8. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development is to build systems of very large capacity and high performance that still have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built: competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that takes this approach, and where we think it will lead.

  9. Middleware in Modern High Performance Computing System Architectures

    SciTech Connect

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2007-01-01

    A recent trend in modern high performance computing (HPC) system architectures employs "lean" compute nodes running a lightweight operating system (OS). Certain parts of the OS as well as other system software services are moved to service nodes in order to increase performance and scalability. This paper examines the impact of this HPC system architecture trend on HPC "middleware" software solutions, which traditionally equip HPC systems with advanced features, such as parallel and distributed programming models, appropriate system resource management mechanisms, remote application steering and user interaction techniques. Since the approach of keeping the compute node software stack small and simple is orthogonal to the middleware concept of adding missing OS features between OS and application, the role and architecture of middleware in modern HPC systems needs to be revisited. The result is a paradigm shift in HPC middleware design, where single middleware services are moved to service nodes, while runtime environments (RTEs) continue to reside on compute nodes.

  10. Fiber optic distribution system for wideband, high performance video

    NASA Astrophysics Data System (ADS)

    Kline, A. R.

    A wideband fiber-optic video distribution system with a bandwidth exceeding 20 MHz has been developed for the NASA Space Station Freedom. The system uses FM modulation and light emitting diodes in combination with lightweight and rugged fiber-optic cables and digital switching elements to provide lightweight, reliable, high-performance video signal distribution over the full extent of the Space Station. The author addresses the Space Station requirements, including environmental constraints, which led to the selected system architecture and choice of components. The design of the modulators and demodulators, optical transmitters and receivers, fiber-optic cable, and the video switches is discussed. Also presented is a description of how the technology can be applied to those military needs which would benefit from the performance, reliability, and EMI/TEMPEST features of the system.

  11. Development of a High Performance Acousto-ultrasonic Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  12. A high-performance workflow system for subsurface simulation

    SciTech Connect

    Freedman, Vicky L.; Chen, Xingyuan; Finsterle, Stefan A.; Freshley, Mark D.; Gorton, Ian; Gosink, Luke J.; Keating, Elizabeth; Lansing, Carina; Moeglein, William AM; Murray, Christopher J.; Pau, George Shu Heng; Porter, Ellen A.; Purohit, Sumit; Rockhold, Mark L.; Schuchardt, Karen L.; Sivaramakrishnan, Chandrika; Vesselinov, Velimir V.; Waichler, Scott R.

    2014-02-14

    Subsurface modeling applications typically neglect uncertainty in the conceptual models, past or future scenarios, and attribute most or all uncertainty to errors in model parameters. In this contribution, uncertainty in technetium-99 transport in a heterogeneous, deep vadose zone is explored with respect to the conceptual model using a next generation user environment called Akuna. Akuna provides a range of tools to manage environmental modeling projects, from managing simulation data to visualizing results from high-performance computational simulators. Core toolsets accessible through the user interface include model setup, grid generation, parameter estimation, and uncertainty quantification. The BC Cribs site at Hanford in southeastern Washington State is used to demonstrate Akuna capabilities. At the BC Cribs site, conceptualization of the system is highly uncertain because only sparse information is available for the geologic conceptual model, the physical and chemical properties of the sediments, and the history of waste disposal operations. Using the Akuna toolset to perform an analysis of conservative solute transport, significant prediction uncertainty in simulated concentrations is demonstrated by conceptual model variation. This demonstrates that conceptual model uncertainty is an important consideration in sparse data environments such as BC Cribs. It is also demonstrated that Akuna and the underlying toolset provide an integrated modeling environment that streamlines model setup, parameter optimization, and uncertainty analyses for high-performance computing applications.

  13. Coal-fired high performance power generating system. Final report

    SciTech Connect

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design, and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV) compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative type gas turbine, efficiencies of over 49% could be realized in advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.

  14. High-performance work systems and occupational safety.

    PubMed

    Zacharatos, Anthea; Barling, Julian; Iverson, Roderick D

    2005-01-01

    Two studies were conducted investigating the relationship between high-performance work systems (HPWS) and occupational safety. In Study 1, data were obtained from company human resource and safety directors across 138 organizations. LISREL VIII results showed that an HPWS was positively related to occupational safety at the organizational level. Study 2 used data from 189 front-line employees in 2 organizations. Trust in management and perceived safety climate were found to mediate the relationship between an HPWS and safety performance measured in terms of personal-safety orientation (i.e., safety knowledge, safety motivation, safety compliance, and safety initiative) and safety incidents (i.e., injuries requiring first aid and near misses). These 2 studies provide confirmation of the important role organizational factors play in ensuring worker safety. PMID:15641891

  15. Engineering Development of Coal-Fired High Performance Power Systems

    SciTech Connect

    2000-12-31

    This report presents work carried out under contract DE-AC22-95PC95144, "Engineering Development of Coal-Fired High Performance Systems Phase II and III." The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% of NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase I, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase I also included preliminary R&D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. Phase II had as its initial objective the development of a complete design base for the construction and operation of a HIPPS prototype plant to be constructed in Phase III. As part of a descoping initiative, the Phase III program has been eliminated and work related to the commercial plant design has been ended. The rescoped program retained a program of engineering research and development focusing on high temperature heat exchangers, e.g., HITAF development (Task 2); a rescoped Task 6 that is pertinent to Vision 21 objectives and focuses on advanced cycle analysis and optimization, integration of gas turbines into complex cycles, and repowering designs; and preparation of the Phase II Technical Report (Task 8). This rescoped program deleted all subsystem testing (Tasks 3, 4, and 5) and the development of a site-specific engineering design and test plan for the HIPPS prototype plant (Task 7). Work reported herein is from Task 2.2, HITAF Air Heaters.

  16. Using distributed OLTP technology in a high performance storage system

    SciTech Connect

    Tyler, T.W.; Fisher, D.S.

    1995-03-01

    The design of scalable mass storage systems requires various system components to be distributed across multiple processors. Most of these processes maintain persistent database-type information (i.e., metadata) on the resources they are responsible for managing (e.g., bitfiles, bitfile segments, physical volumes, virtual volumes, cartridges, etc.). These processes all participate in fulfilling end-user requests and updating metadata information. A number of challenges arise when distributed processes attempt to maintain separate metadata resources with production-level integrity and consistency. For example, when requests fail, metadata changes made by the various processes must be aborted or rolled back. When requests are successful, all metadata changes must be committed together. If all metadata changes cannot be committed together for some reason, then all metadata changes must be rolled back to the previous consistent state. Lack of metadata consistency jeopardizes storage system integrity. Distributed on-line transaction processing (OLTP) technology can be applied to distributed mass storage systems as the mechanism for managing the consistency of distributed metadata. OLTP concepts are familiar to many industries, such as banking and financial services, but are less well known and understood in scientific and technical computing. As mass storage systems and other products are designed using distributed processing and data-management strategies for performance, scalability, and/or availability reasons, distributed OLTP technology can be applied to solve the inherent challenges raised by such environments. This paper discusses the benefits of using distributed transaction processing products. Design and implementation experiences using the Encina OLTP product from Transarc in the High Performance Storage System are presented in more detail as a case study for how this technology can be applied to mass storage systems designed for distributed environments.
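
    The core requirement named here is that metadata updates spread across servers commit or roll back as a unit. The sketch below shows that all-or-nothing pattern with two SQLite databases standing in for distributed resource managers (one for bitfile metadata, one for volume metadata); it illustrates the commit/rollback discipline only, not Encina's actual two-phase commit protocol.

      # All-or-nothing update of metadata held in two separate stores.
      # SQLite stands in for distributed resource managers; this shows the
      # commit/rollback pattern, not Encina's two-phase commit itself.
      import sqlite3

      bitfile_db = sqlite3.connect(":memory:")
      volume_db = sqlite3.connect(":memory:")
      bitfile_db.execute("CREATE TABLE bitfiles (id TEXT, segments INTEGER)")
      volume_db.execute("CREATE TABLE volumes (id TEXT, free_mb INTEGER)")
      volume_db.execute("INSERT INTO volumes VALUES ('vol1', 100)")
      volume_db.commit()

      def store_bitfile(bitfile_id, size_mb):
          """Update both metadata stores together, or neither."""
          try:
              bitfile_db.execute("INSERT INTO bitfiles VALUES (?, 1)", (bitfile_id,))
              cur = volume_db.execute(
                  "UPDATE volumes SET free_mb = free_mb - ? WHERE free_mb >= ?",
                  (size_mb, size_mb))
              if cur.rowcount == 0:
                  raise RuntimeError("no volume with enough free space")
              bitfile_db.commit()
              volume_db.commit()
          except Exception:
              bitfile_db.rollback()   # return all metadata to the previous
              volume_db.rollback()    # consistent state
              raise

      store_bitfile("file1", 60)      # succeeds; 40 MB left on vol1
      try:
          store_bitfile("file2", 60)  # fails; neither store is changed
      except RuntimeError as err:
          print(err)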

  17. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nomura, Tomohiro; Nishimura, Nobuya; Yabushita, Akihiro; Kashiwagi, Takao

    A high performance drying technology is essential from the viewpoint of energy conservation. Recently, drying with superheated steam has received great attention as a way to improve the energy efficiency of conventional air drying processes. Other advantages of superheated steam drying include its inert atmosphere, enhanced drying rate, improved product quality, and easier control. This study presents a new concept of superheated steam drying in which an absorption temperature amplifier is applied in order to recover the waste heat with high efficiency. A feature of this new drying system is that, owing to a closed-circuit dryer, the consumption of heating energy decreases by approximately 50% compared with a conventional non-circulating dryer, and the superheated steam conventionally discharged to maintain the dryer at atmospheric pressure can be reused as heating energy for the generator of the absorption temperature amplifier. In this first report, the thermal performance of the proposed system is analyzed with a computer simulation developed for a solar-assisted absorption heat transformer model at steady-state operating conditions. It may be fair to conclude that this drying system satisfies the desired operating conditions, although some problems remain to be solved in detail in future work.

  18. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation, and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few tens of gigaops, data archived in HSMs in a few tens of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to tens of terabytes/day. This paper discusses HPSS architectural, implementation, and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  19. Coal-fired high performance power generating system

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: thermal efficiency > 47%; NOx, SOx, and particulates < 25% of NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis, we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures, and even particulate impact on the combustor walls. When our model is applied to the long flame concept, it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals, a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  20. SCEC Earthquake System Science Using High Performance Computing

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1 Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10 Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1 Hz deterministic simulation results with 10 Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  1. Systems design of high-performance stainless steels

    NASA Astrophysics Data System (ADS)

    Campbell, Carelyn Elizabeth

    A systems approach has been applied to the design of high performance stainless steels. Quantitative property objectives were addressed by integrating processing/structure/property relations with mechanistic models. Martensitic transformation behavior was described using the Olson-Cohen model for heterogeneous nucleation and the Ghosh-Olson solid-solution strengthening model for interfacial mobility, and incorporating an improved description of Fe-Co-Cr thermodynamic interactions. Coherent M2C precipitation in a BCC matrix was described, taking into account initial paraequilibrium with cementite. Using available SANS data, a composition dependent strain energy was calibrated and a composition independent interfacial energy was evaluated to predict the critical particle size versus the fraction of the reaction completed as input to strengthening theory. Multicomponent Pourbaix diagrams provided an effective tool for evaluating oxide stability; constrained equilibrium calculations correlated oxide stability to Cr enrichment in the oxide film to allow more efficient use of alloy Cr content. Multicomponent solidification simulations provided composition constraints to improve castability. Using the Thermo-Calc and DICTRA software packages, the models were integrated to design a carburizing, secondary-hardening martensitic stainless steel. Initial characterization of the prototype showed good agreement with the design models and achievement of the desired property objectives. Prototype evaluation confirmed the predicted martensitic transformation temperature and the desired carburizing response, achieving a case hardness of Rc 64 in the secondary-hardened condition without case primary carbides. Decarburization experiments suggest that the design core toughness objective (K_IC = 65 MPa√m) can be achieved by reducing the core carbon level to 0.05 weight percent. To achieve the core toughness objective at high core strength levels requires further analysis of an

  2. Manufacturing Advantage: Why High-Performance Work Systems Pay Off.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Bailey, Thomas; Berg, Peter; Kalleberg, Arne L.

    A study examined the relationship between high-performance workplace practices and the performance of plants in the following manufacturing industries: steel, apparel, and medical electronic instruments and imaging. The multilevel research methodology combined the following data collection activities: (1) site visits; (2) collection of plant…

  3. High-Performance Acousto-Ultrasonic Scan System Being Developed

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Martin, Richard E.; Cosgriff, Laura M.; Gyekenyesi, Andrew L.; Kautz, Harold E.

    2003-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition and distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods, such as ultrasonic c-scan, x-ray radiography, and thermographic inspection, which tend to be used primarily for discrete flaw detection. Throughout its history, AU has been used to inspect polymer matrix composites, metal matrix composites, ceramic matrix composites, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. This year, essential AU technology was reviewed. In addition, the basic hardware and software configuration for the scanner was developed, and preliminary results with the system were described. Mechanical and environmental loads applied to composite materials can cause distributed damage (as well as discrete defects) that plays a significant role in the degradation of physical properties. Such damage includes fiber/matrix debonding (interface failure), matrix microcracking, and fiber fracture and buckling. Investigations at the NASA Glenn Research Center have shown that traditional NDE scan inspection methods such as ultrasonic c-scan, x-ray imaging, and thermographic imaging tend to be more suited to discrete defect detection rather than the characterization of accumulated distributed microdamage in composites. Since AU is focused on assessing the distributed microdamage state of the material in between the sending and receiving transducers, it has proven to be quite suitable for assessing the relative composite material state. One major success story at Glenn with AU measurements has been the correlation between the ultrasonic decay rate obtained during AU

  5. High-Performance Scanning Acousto-Ultrasonic System

    NASA Technical Reports Server (NTRS)

    Roth, Don; Martin, Richard; Kautz, Harold; Cosgriff, Laura; Gyekenyesi, Andrew

    2006-01-01

    A high-performance scanning acousto-ultrasonic system, now undergoing development, is designed to afford enhanced capabilities for imaging microstructural features, including flaws, inside plate specimens of materials. The system is expected to be especially helpful in analyzing defects that contribute to failures in polymer- and ceramic-matrix composite materials, which are difficult to characterize by conventional scanning ultrasonic techniques and other conventional nondestructive testing techniques. Selected aspects of the acousto-ultrasonic method have been described in several NASA Tech Briefs articles in recent years. Summarizing briefly: The acousto-ultrasonic method involves the use of an apparatus like the one depicted in the figure (or an apparatus of similar functionality). Pulses are excited at one location on a surface of a plate specimen by use of a broadband transmitting ultrasonic transducer. The stress waves associated with these pulses propagate along the specimen to a receiving transducer at a different location on the same surface. Along the way, the stress waves interact with the microstructure and flaws present between the transducers. The received signal is analyzed to evaluate the microstructure and flaws. The specific variant of the acousto-ultrasonic method implemented in the present developmental system goes beyond the basic principle described above to include the following major additional features: Computer-controlled motorized translation stages are used to automatically position the transducers at specified locations. Scanning is performed in the sense that the measurement, data-acquisition, and data-analysis processes are repeated at different specified transducer locations in an array that spans the specimen surface (or a specified portion of the surface). A pneumatic actuator with a load cell is used to apply a controlled contact force. In analyzing the measurement data for each pair of transducer locations in the scan, the total

  6. Low-Cost, High-Performance Hall Thruster Support System

    NASA Technical Reports Server (NTRS)

    Hesterman, Bryce

    2015-01-01

    Colorado Power Electronics (CPE) has built an innovative modular PPU for Hall thrusters, including discharge, magnet, heater, and keeper supplies, and an interface module. This high-performance PPU offers resonant circuit topologies, magnetics design, modularity, and stable, sustained operation during severe Hall-effect thruster current oscillations. Laboratory testing has demonstrated a discharge-module efficiency of 96 percent, which is considerably higher than the current state of the art.

  7. High performance quarter-inch cartridge tape systems

    NASA Technical Reports Server (NTRS)

    Schwarz, Ted

    1993-01-01

    Within the established low-cost structure of Data Cartridge drive technology, it is possible to achieve nearly 1 terabyte (10^12 bytes) of data capacity and transfer rates of more than 1 Gbit/sec (greater than 100 Mbytes/sec). Whether it is desirable to place this capability within a single cartridge will be determined by the market. The 3.5 in. or smaller form factor may suffice to serve both the current Data Cartridge market and a high-performance segment. In any case, Data Cartridge technology provides a strong, sustainable technology growth path into the 21st century.

  8. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high-performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data, and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system, and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  9. IMPULSE---an advanced, high performance nuclear thermal propulsion system

    SciTech Connect

    Petrosky, L.J.; Disney, R.K.; Mangus, J.D.; Gunn, S.A.; Zweig, H.R.

    1993-01-10

    IMPULSE is an advanced nuclear propulsion engine for future space missions based on a novel conical fuel. Fuel assemblies are formed by stacking a series of truncated (U, Zr)C cones with non-fueled lips. Hydrogen flows radially inward between the cones to a central plenum connected to a high-performance bell nozzle. The reference IMPULSE engine, rated at 75,000 lb thrust and 1800 MWt, weighs 1360 kg and is 3.65 meters in height and 81 cm in diameter. Specific impulse is estimated to be 1000 s for a 15-minute life at full power. If longer lifetimes are required, the operating temperature can be reduced with a concomitant decrease in specific impulse. Advantages of this concept include: well-defined coolant paths without outlet flow restrictions; redundant orificing; very low thermal gradients, and hence thermal stresses, across the fuel elements; and reduced thermal stresses because of the truncated conical shape of the fuel elements.

  10. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  11. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    Requirements are carefully described in specifications for systems to be acquired, but often there is no requirement to provide measurements and performance monitoring to ensure that requirements continue to be met over the long term after acceptance. A set of measurements for various UNIX-based systems will be available at the 1992 Goddard Conference on Mass Storage Systems and Technologies. The authors invite others to contribute to the set of measurements. The framework for presenting the measurements of supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them is given. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well. The need to integrate measurements from all these components, from different vendors and from third-party software systems, has been recognized, and there are efforts to standardize a framework to do this. The measurement activity falls into the domain of management standards. Standards work is ongoing for Open Systems Interconnection (OSI) systems management; AT&T, Digital, and Hewlett-Packard are developing management systems based on this architecture even though it is not finished. Another effort is the UNIX International Performance Management Working Group. In addition, there are the Open Software Foundation's Distributed Management Environment and the Object Management Group. A paper comparing the OSI systems management model and the Object Management Group model has been written. The IBM world has had a measurement capability for various IBM systems since the 1970s, and different vendors were able to develop tools for analyzing and viewing these measurements. Since IBM was the only vendor, the user groups were able to lobby IBM for the kinds of measurements needed. In the UNIX world of multiple vendors, a common set of measurements will not be as easy to obtain.

  12. High-performance multimedia encryption system based on chaos.

    PubMed

    Hasimoto-Beltrán, Rogelio

    2008-06-01

    Current chaotic encryption systems in the literature do not fulfill security and performance demands for real-time multimedia communications. To satisfy these demands, we propose a generalized symmetric cryptosystem based on N independently iterated chaotic maps (N-map array) periodically perturbed with a three-level perturbation scheme and a double feedback (global and local) to increase the system's robustness to attacks. The first- and second-level perturbations make the cryptosystem extremely sensitive to changes in the plaintext data, since the system's output itself (ciphertext global feedback) is used in the perturbation process. The third-level perturbation is a system reset, in which the system key and chaotic maps are replaced with totally new values. An analysis of the proposed scheme regarding its vulnerability to attacks, statistical properties, and implementation performance is presented. To the best of our knowledge we provide a secure cryptosystem with one of the highest levels of performance for real-time multimedia communications. PMID:18601477
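
    As a toy illustration of the general construction described here (an array of independently iterated chaotic maps whose states are perturbed by ciphertext feedback), the sketch below implements a logistic-map keystream cipher. All constants and names are illustrative assumptions; this is not the paper's cryptosystem and is not secure for real use.

        import hashlib

        R = 3.99  # logistic-map parameter in the chaotic regime

        def _init_states(key: bytes, n_maps: int):
            # Derive n_maps initial conditions in (0, 1) from the key.
            digest = hashlib.sha256(key).digest()
            return [(digest[i] + 1) / 257.0 for i in range(n_maps)]

        def _crypt(data: bytes, key: bytes, decrypting: bool, n_maps: int = 4) -> bytes:
            x = _init_states(key, n_maps)
            feedback = 0                              # global ciphertext feedback
            out = bytearray()
            for b in data:
                # Iterate every map and perturb it with the last ciphertext byte.
                for i in range(n_maps):
                    x[i] = R * x[i] * (1.0 - x[i])
                    x[i] = (x[i] + feedback / 4096.0) % 1.0 or 0.37  # avoid fixed point 0
                k = int(sum(x) * 2**16) & 0xFF        # combine the N-map array into a key byte
                out.append(b ^ k)
                # Feedback is always the ciphertext byte, so both directions agree.
                feedback = b if decrypting else out[-1]
            return bytes(out)

        encrypt = lambda p, key: _crypt(p, key, decrypting=False)
        decrypt = lambda c, key: _crypt(c, key, decrypting=True)

        msg = b"real-time multimedia frame"
        assert decrypt(encrypt(msg, b"k1"), b"k1") == msg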

  13. High Performance Image Processing And Laser Beam Recording System

    NASA Astrophysics Data System (ADS)

    Fanelli, Anthony R.

    1980-09-01

    The article is meant to provide the digital image recording community with an overview of digital image processing and recording. The Digital Interactive Image Processing System (DIIPS) was assembled by ESL for Air Force Systems Command under Rome Air Development Center's guidance. The system provides the capability of mensuration and exploitation of digital imagery, with both mono and stereo digital images as inputs. This development provided the system design, basic hardware, software, and operational procedures to enable Air Force Systems Command photo analysts to perform digital mensuration and exploitation of stereo digital images. The engineering model was based on state-of-the-art technology and, to the extent possible, off-the-shelf hardware and software. A laser recorder, known as the Ultra High Resolution Image Recorder (UHRIR), was also developed for the DIIPS system. The UHRIR is a prototype model that will enable Air Force Systems Command to record computer-enhanced digital image data on photographic film at high resolution with geometric and radiometric distortion minimized.

  14. A High Performance Virtualized Seismic Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Eakins, J. A.; Reyes, J. C.; Franke, M.; Sánchez, R. F.; Cortes Muñoz, P.; Busby, R. W.; Vernon, F.; Barrientos, S. E.

    2014-12-01

    As part of a collaborative effort with the Incorporated Research Institutions for Seismology, a virtualized seismic data acquisition and processing system was recently installed at the Centro Sismológico Nacional (CSN) at the Universidad de Chile for use as part of their early warning system. Using lessons learned from the Earthscope Transportable Array project, the design of this system consists of dedicated acquisition, processing, and data distribution nodes hosted on a high-availability hypervisor cluster. Data is exchanged with the IRIS Data Management Center and the existing processing infrastructure at the CSN. The processing nodes are backed by 20 TB of hybrid solid-state disk (SSD) and spinning-disk storage with automatic tiering of data between the disks. As part of the installation, best practices for station metadata maintenance were discussed and applied to the existing IRIS-sponsored stations, as well as to over 30 new stations being added to the early warning network. Four virtual machines (VMs) were configured with distinct tasks. Two VMs are dedicated to data acquisition, one to real-time data processing, and one serves as a relay between the data acquisition and processing systems, with services for the existing earthquake revision and dissemination infrastructure. The first acquisition system connects directly to Basalt dataloggers and Q330 digitizers, managing them and acquiring seismic data as well as state-of-health (SOH) information. As newly deployed stations become available (beyond the existing 30), this VM is configured to acquire data from them and incorporate the additional data. The second acquisition system imports the legacy network of the CSN and data streams provided by other data centers. The processing system is connected to the production and archive databases. The relay system merges all incoming data streams and obtains the processing results. Data and processing packets are available for subsequent review and dissemination by the CSN. Such

  15. High-performance image database system for remote sensing

    NASA Astrophysics Data System (ADS)

    Shock, Carter T.; Chang, Chialin; Davis, Larry S.; Goward, Samuel N.; Saltz, Joel H.; Sussman, Alan D.

    1996-02-01

    We present the design of an image database system for remotely sensed imagery. The system stores and serves level 1B remotely sensed data, providing users with a flexible and efficient means for specifying and obtaining image-like products on either a global or a local scale. We have developed both parallel and sequential versions of the system; the parallel version uses the CHAOS++ and Jovian libraries, developed at the University of Maryland as part of an NSF grand challenge project, to support parallel object oriented programming and parallel I/O, respectively.

  16. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted to be shapes, dimensions, probability range factors, and cost. A structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and their limitations are discussed. A deterministic reliability technique combining the benefits of both, which is also timely and economically verifiable, is proposed for static structures. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  17. Personal communication system combines high performance with miniaturization

    NASA Technical Reports Server (NTRS)

    Atlas, N. D.

    1967-01-01

    Personal communication system provides miniaturized components that incorporate high level signal characteristics plus noise rejection in both microphone and earphone circuitry. The microphone is designed to overcome such spacecraft flight problems as size, ambient noise level, and RF interference.

  18. High Performance Drying System Using Absorption Temperature Amplifier

    NASA Astrophysics Data System (ADS)

    Nishimura, Nobuya; Nomura, Tomohiro; Yabushita, Akihiro; Kashiwagi, Takao

    A computer simulation of the transient drying process has been developed in order to predict the dynamic thermal performance of a new superheated-steam drying system using an absorption-type temperature amplifier as a steam superheater. A feature of this drying system is that the exhausted superheated steam conventionally discharged from the dryer can be reused as a driving heat source for the generator in the heat pump. But in the transient drying process, the evaporation of moisture sharply decreases, so reuse of the exhausted superheated steam as a heating source for the generator can hardly be expected. The effects of this exhausted superheated steam and of changes in the hot water and cooling water temperatures were therefore mainly investigated, checking whether this drying system can be driven directly by low-level energy from the sun or waste heat. Furthermore, the performance of this drying system was evaluated on a qualitative basis by using the exergy efficiency. The results show that, under transient drying conditions, the temperature boost of superheated steam is possible at a high temperature, and thus the absorption-type temperature amplifier can be an effective steam superheater system.

  19. Miniaturized high-performance staring thermal imaging system

    NASA Astrophysics Data System (ADS)

    Cabanski, Wolfgang A.; Breiter, Rainer; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Ennenga, L.; Lipinski, Ulrich M.; Wehrhahn, T.

    2000-07-01

    A high-resolution thermal imaging system was developed based on a 384 X 288 mercury cadmium telluride (MCT) mid-wave infrared (MWIR) detection module with a 2 X 2 microscan for improved geometrical resolution. The primary design goals were a long identification range of 3 km and high system performance in adverse weather conditions, achieved by a system with a small entrance pupil and minimized dimensions to fit into existing apertures of armored vehicles, reconnaissance systems, and stabilized platforms. A staring FPA module, with its potential for long integration times, together with a microscan for improved geometrical resolution, best fits these requirements. A robust microscanner was developed to meet military requirements and was integrated with AIM's 384 X 288 MCT MWIR module and data processing. The modules allow up to 2 ms integration time at a 25 Hz frame rate and output a 768 X 576 high-resolution CCIR-standard image. The video image processing (VIP) unit provides the computing power for scene-based, self-learning nonuniformity correction (NUC) algorithms that save calibration sources. This NUC algorithm takes nonlinear effects into account for unsurpassed performance in highly dynamic scenes. The detection module and VIP are designed to interface with STN's mature system electronics, used, for example, in hundreds of fielded OPHELIOS thermal camera sets. The system electronics provides a number of different interface features such as a double serial control bus (CANBUS) interface, analog and digital outputs, and different video outputs. The integrated graphics generation allows advanced graphic overlays to be placed on the thermal image and also on external video signals via the video input feature. This electronics provides the power supply for the whole thermal imaging system as well as different processor-controlled algorithms for field-of-view or zoom drives, focus drives, athermalization, and temperature control of the FLIR. A
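
    The 2 X 2 microscan doubles resolution by interleaving four sub-pixel-shifted frames. Below is a minimal numpy sketch of that reconstruction step, assuming ideal half-pixel optical shifts; the module's real processing is certainly more involved.

        import numpy as np

        def microscan_2x2(frames):
            # Interleave four FPA frames taken with half-pixel shifts
            # (0,0), (0,1/2), (1/2,0), (1/2,1/2) into one double-resolution image.
            f00, f01, f10, f11 = frames           # each 288 x 384, for example
            h, w = f00.shape
            out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
            out[0::2, 0::2] = f00                 # unshifted samples
            out[0::2, 1::2] = f01                 # half-pixel column shift
            out[1::2, 0::2] = f10                 # half-pixel row shift
            out[1::2, 1::2] = f11                 # diagonal half-pixel shift
            return out

        frames = [np.random.rand(288, 384) for _ in range(4)]
        print(microscan_2x2(frames).shape)        # (576, 768), i.e. a 768 X 576 image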

  20. American Models of High-Performance Work Systems.

    ERIC Educational Resources Information Center

    Appelbaum, Eileen; Batt, Rosemary

    1993-01-01

    Looks at work systems that draw on quality engineering and management concepts and use incentives. Discusses how some U.S. companies improve performance and maintain high quality. Suggests that the federal government strategy should include measures to support change in production processes and promote efficient factors of production. (JOW)

  1. Nanostructured microfluidic digestion system for rapid high-performance proteolysis

    PubMed Central

    Cheng, Gong; Hao, Si-Jie; Yu, Xu

    2014-01-01

    A novel microfluidic protein digestion system with a nanostructured and bioactive inner surface was constructed by an easy biomimetic self-assembly strategy for rapid and effective proteolysis in 2 minutes, which is faster than conventional overnight digestion methods. It is expected that this work will contribute to rapid online digestion in future high-throughput proteomics. PMID:25511010

  2. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  3. Low cost, high performance, self-aligning miniature optical systems

    PubMed Central

    Kester, Robert T.; Christenson, Todd; Kortum, Rebecca Richards; Tkaczyk, Tomasz S.

    2009-01-01

    The most expensive aspects in producing high quality miniature optical systems are the component costs and long assembly process. A new approach for fabricating these systems that reduces both aspects through the implementation of self-aligning LIGA (German acronym for lithographie, galvanoformung, abformung, or x-ray lithography, electroplating, and molding) optomechanics with high volume plastic injection molded and off-the-shelf glass optics is presented. This zero alignment strategy has been incorporated into a miniature high numerical aperture (NA = 1.0W) microscope objective for a fiber confocal reflectance microscope. Tight alignment tolerances of less than 10 μm are maintained for all components that reside inside of a small 9 gauge diameter hypodermic tubing. A prototype system has been tested using the slanted edge modulation transfer function technique and demonstrated to have a Strehl ratio of 0.71. This universal technology is now being developed for smaller, needle-sized imaging systems and other portable point-of-care diagnostic instruments. PMID:19543344
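
    For reference, the slanted-edge technique mentioned here derives the MTF from an image of an edge. The following is a simplified single-profile sketch (a real slanted-edge measurement first projects many rows of a tilted edge into a super-sampled edge spread function, a binning step omitted here), assuming numpy/scipy:

        import numpy as np
        from scipy.special import erf

        def mtf_from_edge(esf, dx):
            # ESF -> LSF by differentiation, then |FFT| normalized to MTF(0) = 1.
            lsf = np.gradient(esf, dx)                # line spread function
            lsf = lsf * np.hanning(len(lsf))          # taper to suppress edge noise
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf), d=dx)   # cycles per unit length
            return freqs, mtf / mtf[0]

        # Synthetic blurred edge: error-function profile with 2 um (1-sigma) blur
        x = np.linspace(-20e-6, 20e-6, 512)
        esf = 0.5 * (1 + erf(x / (np.sqrt(2) * 2e-6)))
        freqs, mtf = mtf_from_edge(esf, x[1] - x[0])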

  4. Research in the design of high-performance reconfigurable systems

    NASA Technical Reports Server (NTRS)

    Mcewan, S. D.; Spry, A. J.

    1985-01-01

    Computer aided design and computer aided manufacturing have the potential for greatly reducing the cost and lead time in the development of VLSI components. This potential paves the way for the design and fabrication of a wide variety of economically feasible high level functional units. It was observed that current computer systems have only a limited capacity to absorb new VLSI component types other than memory, microprocessors, and a relatively small number of other parts. The first purpose is to explore a system design which is capable of effectively incorporating a considerable number of VLSI part types and will both increase the speed of computation and reduce the attendant programming effort. A second purpose is to explore design techniques for VLSI parts which when incorporated by such a system will result in speeds and costs which are optimal. The proposed work may lay the groundwork for future efforts in the extensive simulation and measurements of the system's cost effectiveness and lead to prototype development.

  5. A High Performance Content Based Recommender System Using Hypernym Expansion

    Energy Science and Technology Software Center (ESTSC)

    2015-10-20

    There are two major limitations in content-based recommender systems: the first is accurately measuring the similarity of preferred documents to a large set of general documents, and the second is over-specialization, which limits the "interesting" documents recommended from a general document set. To address these issues, we propose combining linguistic methods and term-frequency methods to improve overall performance and recommendation quality.
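
    A toy sketch of the hypernym-expansion idea: each document term is expanded with its hypernym chain before term-frequency matching, so documents about "beagle" and "poodle" become comparable through "dog". The hand-made hypernym map stands in for a real lexical resource such as WordNet; none of this is the package's actual code.

        from collections import Counter
        from math import sqrt

        # Toy hypernym map standing in for a real lexical resource.
        HYPERNYMS = {"beagle": "dog", "poodle": "dog", "dog": "canine",
                     "sedan": "car", "car": "vehicle"}

        def expanded_tf(text):
            # Term-frequency vector with each term's hypernym chain added.
            tf = Counter()
            for term in text.lower().split():
                while term:
                    tf[term] += 1
                    term = HYPERNYMS.get(term)   # walk up the chain; None stops
            return tf

        def cosine(a, b):
            dot = sum(a[t] * b[t] for t in a)
            norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        # Nonzero similarity despite no shared surface terms:
        print(cosine(expanded_tf("beagle plays"), expanded_tf("poodle runs")))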

  6. Fitting modular reconnaissance systems into modern high-performance aircraft

    NASA Astrophysics Data System (ADS)

    Stroot, Jacquelyn R.; Pingel, Leslie L.

    1990-11-01

    The installation of the Advanced Tactical Air Reconnaissance System (ATARS) in the F/A-18D(RC) presented a complex set of design challenges. At the time of the F/A-18D(RC) ATARS option exercise, the design and development of the ATARS subsystems and the parameters of the F/A-18D(RC) were essentially fixed. ATARS is to be installed in the gun bay of the F/A-18D(RC), taking up no additional room, nor adding any more weight than what was removed. The F/A-18D(RC) installation solution required innovations in mounting, cooling, and fit techniques, which made constant trade study essential. The successful installation in the F/A-18D(RC) is the result of coupling fundamental design engineering with brainstorming and nonstandard approaches to every situation. ATARS is sponsored by the Aeronautical Systems Division, Wright-Patterson AFB, Ohio. The F/A-18D(RC) installation is being funded to the Air Force by the Naval Air Systems Command, Washington, D.C.

  7. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S; Chung, C; Fitsos, P; Gronberg, J; Ross, M; Khainovski, O; Kolomensky, Y; Loscutoff, P; Slater, M; Thomson, M; Ward, D; Boogert, S; Vogel, V; Meller, R; Lyapin, A; Malton, S; Miller, D; Frisch, J; Hinton, S; May, J; McCormick, D; Smith, S; Smith, T; White, G; Orimoto, T; Hayano, H; Honda, Y; Terunuma, N; Urakawa, J

    2005-09-12

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved - ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  8. Resolution of a High Performance Cavity Beam Position Monitor System

    SciTech Connect

    Walston, S.; Chung, C.; Fitsos, P.; Gronberg, J.; Ross, M.; Khainovski, O.; Kolomensky, Y.; Loscutoff, P.; Slater, M.; Thomson, M.; Ward, D.; Boogert, S.; Vogel, V.; Meller, R.; Lyapin, A.; Malton, S.; Miller, D.; Frisch, J.; Hinton, S.; May, J.; McCormick, D.; /SLAC /Caltech /KEK, Tsukuba

    2007-07-06

    International Linear Collider (ILC) interaction region beam sizes and component position stability requirements will be as small as a few nanometers. It is important to the ILC design effort to demonstrate that these tolerances can be achieved--ideally using beam-based stability measurements. It has been estimated that RF cavity beam position monitors (BPMs) could provide position measurement resolutions of less than one nanometer and could form the basis of the desired beam-based stability measurement. We have developed a high resolution RF cavity BPM system. A triplet of these BPMs has been installed in the extraction line of the KEK Accelerator Test Facility (ATF) for testing with its ultra-low emittance beam. A metrology system for the three BPMs was recently installed. This system employed optical encoders to measure each BPM's position and orientation relative to a zero-coefficient of thermal expansion carbon fiber frame and has demonstrated that the three BPMs behave as a rigid-body to less than 5 nm. To date, we have demonstrated a BPM resolution of less than 20 nm over a dynamic range of +/- 20 microns.

  9. High performance graphical data trending in a distributed system

    NASA Astrophysics Data System (ADS)

    Maureira, Cristián; Hoffstadt, Arturo; López, Joao; Troncoso, Nicolás; Tobar, Rodrigo; von Brand, Horst H.

    2010-07-01

    Trending near real-time data is a complex task, especially in distributed environments. This problem was typically tackled in financial and transaction systems, but it now applies all the more in other contexts, such as hardware monitoring in large-scale projects. Data handling requires subscription to specific data feeds that need to be implemented avoiding replication, and the rate of transmission has to be assured. On the side of the graphical client, rendering needs to be fast enough that it may be perceived as real-time processing and display. ALMA Common Software (ACS) provides a software infrastructure for distributed projects which may require trending large volumes of data. For these requirements ACS offers a Sampling System, which allows sampling selected data feeds at different frequencies. Along with this, it provides a graphical tool to plot the collected information, which needs to perform as well as possible. Currently there are many graphical libraries available for data trending. This poses a problem when trying to choose one: it is necessary to know which has the best performance, and which combination of programming language and library is the best choice. This document analyzes the performance of different graphical libraries and languages in order to identify the optimal environment when writing or re-factoring an application using trending technologies in distributed systems. To properly address the complexity of the problem, a specific set of alternatives was pre-selected, including libraries in Java and Python, languages which are part of ACS. A stress benchmark will be developed in a simulated distributed environment using ACS in order to test the trending libraries.

  10. High-performance space shuttle auxiliary propellant valve system

    NASA Technical Reports Server (NTRS)

    Smith, G. M.

    1973-01-01

    Several potential valve closures for the space shuttle auxiliary propulsion system (SS/APS) were investigated analytically and experimentally in a modeling program. The most promising of these were analyzed and experimentally evaluated in a full-size functional valve test fixture of novel design. The engineering investigations conducted for both model and scale evaluations of the SS/APS valve closures and functional valve fixture are described. Preliminary designs, laboratory tests, and overall valve test fixture designs are presented, and a final recommended flightweight SS/APS valve design is presented.

  11. Performance analysis of memory hierarchies in high performance systems

    SciTech Connect

    Yogesh, A.

    1993-07-01

    This thesis studies memory bandwidth as a performance predictor of programs. The focus of this work is on computationally intensive programs. These programs are the most likely to access large amounts of data, stressing the memory system. Computationally intensive programs are also likely to use highly optimizing compilers to produce the fastest executables possible. Methods to reduce the amount of data traffic by increasing the average number of references to each item while it resides in the cache are explored. Increasing the average number of references to each cache item reduces the number of memory requests. Chapter 2 describes the DLX architecture. This is the architecture on which all the experiments were performed. Chapter 3 studies memory moves as a performance predictor for a group of application programs. Chapter 4 introduces a model to study the performance of programs in the presence of memory hierarchies. Chapter 5 explores some compiler optimizations that can help increase the references to each item while it resides in the cache.
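
    The classic way to raise the number of references to each item while it resides in the cache is loop blocking (tiling). Below is a sketch in numpy showing the structure of a blocked matrix multiply; the actual cache benefit depends on the block size relative to cache capacity on a given machine, and the example is illustrative rather than taken from the thesis.

        import numpy as np

        def blocked_matmul(A, B, bs=64):
            # Each (bs x bs) block of A and B is reused across a whole
            # block-row/column of the product, raising references per
            # item while it resides in the cache.
            n = A.shape[0]
            C = np.zeros_like(A)
            for i in range(0, n, bs):
                for j in range(0, n, bs):
                    for k in range(0, n, bs):
                        C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
            return C

        A = np.random.rand(256, 256); B = np.random.rand(256, 256)
        assert np.allclose(blocked_matmul(A, B), A @ B)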

  12. Building High-Performing and Improving Education Systems. Systems and Structures: Powers, Duties and Funding. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    This Review looks at the way high-performing and improving education systems share out power and responsibility. Resources--in the form of funding, capital investment or payment of salaries and other ongoing costs--are some of the main levers used to make policy happen, but are not a substitute for well thought-through and appropriate policy…

  13. NFS as a user interface to a high-performance data system

    SciTech Connect

    Mercier, C.W.

    1991-01-01

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  14. Research into the interaction between high performance and cognitive skills in an intelligent tutoring system

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.

    1991-01-01

    Two intelligent tutoring systems were developed. These tutoring systems are being used to study the effectiveness of intelligent tutoring systems in training high performance tasks and the interrelationship of high performance and cognitive tasks. The two tutoring systems, referred to as the Console Operations Tutors, were built using the same basic approach to the design of an intelligent tutoring system. This design approach allowed researchers to more rapidly implement the cognitively based tutor, the OMS Leak Detect Tutor, by using the foundation of code generated in the development of the high performance based tutor, the Manual Select Keyboard (MSK). It is believed that the approach can be further generalized to develop a generic intelligent tutoring system implementation tool.

  15. High performance solar desiccant cooling system: Performance evaluation and research recommendations

    NASA Astrophysics Data System (ADS)

    Schlepp, D. R.; Schultz, K. J.

    1984-09-01

    The current status of solar desiccant cooling was assessed and recommendations were made for continued research to develop high-performance systems competitive with conventional cooling systems. Solid desiccant, liquid desiccant, and hybrid systems combining desiccant dehumidifiers with vapor compression units are considered. Currently, all desiccant systems fall somewhat short of being competitive with conventional systems. Hybrid systems appear to have the greatest potential in the short term. Solid systems are close to meeting performance goals. Development of high-performance solid desiccant dehumidifiers based on parallel-passage designs should be pursued. Liquid-system collector/generators and efficient absorbers should receive attention. Model development is also indicated. Continued development of hybrid systems is directly tied to the above work.

  16. New architectures to reduce I/O bottlenecks in high-performance systems

    SciTech Connect

    Coleman, S.S.; Watson, R.W.

    1993-01-01

    Large commercial and scientific applications are straining input/output and storage facilities, a condition compounded by new networking and distributed-system technology, and by supercomputer, massively parallel, and high-performance workstation architectures. This paper reviews large-scale application I/O requirements that are driving the need for high-performance, distributed, hierarchical storage, and then discusses the emerging shift to network-connected devices and third-party protocol architectures to meet these requirements. It illustrates the discussion with actual implementations and with standards work under way in the IEEE Storage System Standards Working Group.

  17. An intelligent tutoring system for the investigation of high performance skill acquisition

    NASA Technical Reports Server (NTRS)

    Fink, Pamela K.; Herren, L. Tandy; Regian, J. Wesley

    1991-01-01

    The issue of training high performance skills is of increasing concern. These skills include tasks such as driving a car, playing the piano, and flying an aircraft. Traditionally, the training of high performance skills has been accomplished through the use of expensive, high-fidelity, 3-D simulators and/or on-the-job training using the actual equipment. Such an approach to training is quite expensive. The design, implementation, and deployment of an intelligent tutoring system developed for the purpose of studying the effectiveness of skill acquisition using lower-cost, lower-physical-fidelity, 2-D simulation are described. Preliminary experimental results are quite encouraging, indicating that intelligent tutoring systems are a cost-effective means of training high performance skills.

  18. High Performance Work System, HRD Climate and Organisational Performance: An Empirical Study

    ERIC Educational Resources Information Center

    Muduli, Ashutosh

    2015-01-01

    Purpose: This paper aims to study the relationship between high-performance work system (HPWS) and organizational performance and to examine the role of human resource development (HRD) Climate in mediating the relationship between HPWS and the organizational performance in the context of the power sector of India. Design/methodology/approach: The…

  19. High-Performance Work Systems and School Effectiveness: The Case of Malaysian Secondary Schools

    ERIC Educational Resources Information Center

    Maroufkhani, Parisa; Nourani, Mohammad; Bin Boerhannoeddin, Ali

    2015-01-01

    This study focuses on the impact of high-performance work systems on the outcomes of organizational effectiveness with the mediating roles of job satisfaction and organizational commitment. In light of the importance of human resource activities in achieving organizational effectiveness, we argue that higher employees' decision-making capabilities…

  20. Unlocking the Black Box: Exploring the Link between High-Performance Work Systems and Performance

    ERIC Educational Resources Information Center

    Messersmith, Jake G.; Patel, Pankaj C.; Lepak, David P.

    2011-01-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level…

  1. High Performance Work Systems and Organizational Outcomes: The Mediating Role of Information Quality.

    ERIC Educational Resources Information Center

    Preuss, Gil A.

    2003-01-01

    A study of the effect of high-performance work systems on 935 nurses and 182 nurses aides indicated that quality of decision-making information depends on workers' interpretive skills and partially mediated effects of work design and total quality management on organizational performance. Providing relevant knowledge and opportunities to use…

  2. High Performance Variable Speed Drive System and Generating System with Doubly Fed Machines

    NASA Astrophysics Data System (ADS)

    Tang, Yifan

    Doubly fed machines are another alternative for variable-speed drive systems. The doubly fed machines, including the doubly fed induction machine, the self-cascaded induction machine, and the doubly excited brushless reluctance machine, have several attractive advantages for variable-speed drive applications, the most important one being the significant cost reduction from a reduced power converter rating. With better understanding, improved machine design, flexible power converters, and innovative controllers, the doubly fed machines could compete favorably for many applications, which may also include variable-speed power generation. The goal of this research is to enhance the attractiveness of the doubly fed machines for both variable-speed drive and variable-speed generator applications. Recognizing that wind power is one of the favorable clean, renewable energy sources that can contribute to the solution of the energy and environment dilemma, a novel variable-speed constant-frequency wind power generating system is proposed. Variable-speed operation improves the energy-capturing capability of the wind turbine. The improvement can be further enhanced by effectively utilizing the doubly excited brushless reluctance machine in a slip power recovery configuration. For the doubly fed machines, a stator-flux two-axis dynamic model is established, based on which a flexible active and reactive power control strategy can be developed. High-performance operation of the drive and generating systems is obtained through advanced control methods, including stator field orientation control, fuzzy logic control, and adaptive fuzzy control. System studies are pursued through unified modeling, computer simulation, stability analysis, and power flow analysis of the complete drive system or generating system with the machine, the converter, and the control. Laboratory implementations and test results with a digital signal processor system are also presented.
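
    For reference, a generic stator-flux two-axis (d-q) model of a doubly fed machine of the kind alluded to here, in one common sign convention (not necessarily the dissertation's exact formulation):

        v_{ds} = R_s i_{ds} + \frac{d\lambda_{ds}}{dt} - \omega_e \lambda_{qs}, \qquad
        v_{qs} = R_s i_{qs} + \frac{d\lambda_{qs}}{dt} + \omega_e \lambda_{ds}

        v_{dr} = R_r i_{dr} + \frac{d\lambda_{dr}}{dt} - (\omega_e - \omega_r) \lambda_{qr}, \qquad
        v_{qr} = R_r i_{qr} + \frac{d\lambda_{qr}}{dt} + (\omega_e - \omega_r) \lambda_{dr}

    with flux linkages \lambda_{ds} = L_s i_{ds} + L_m i_{dr} (and similarly for the other axes). Orienting the reference frame on the stator flux (\lambda_{qs} = 0, \lambda_{ds} = \lambda_s) approximately decouples the stator powers,

        P_s \approx -\tfrac{3}{2} \omega_e \lambda_s \frac{L_m}{L_s} i_{qr}, \qquad
        Q_s \approx \tfrac{3}{2} \omega_e \lambda_s \frac{\lambda_s - L_m i_{dr}}{L_s}

    so active and reactive power can be regulated independently through the two rotor current components, which is the basis of the flexible control strategy mentioned in the abstract.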

  3. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
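
    A back-of-the-envelope Monte Carlo in the spirit of the abstraction described (noise characterized by a recurrence period and a per-occurrence duration), comparing synchronized versus random noise on a 21-level broadcast tree, corresponding to about 2^21 = 2,097,152 nodes. The parameters and tree model are illustrative assumptions, not xSim's model.

        import random

        def hop_delay(start, period, duration, offset):
            # Extra time if 'start' lands inside a noise window of the given
            # period/duration, shifted by this node's phase offset.
            pos = (start - offset) % period
            return duration - pos if pos < duration else 0.0

        def bcast_time(levels, hop=5.0, period=1000.0, duration=25.0, synced=True):
            # One shared phase (synchronized daemons) or an independent phase
            # per level of the broadcast tree (random noise). Times in us.
            shared = random.uniform(0, period)
            t = 0.0
            for _ in range(levels):
                off = shared if synced else random.uniform(0, period)
                t += hop + hop_delay(t, period, duration, off)
            return t

        trials = 10000
        for synced in (True, False):
            avg = sum(bcast_time(21, synced=synced) for _ in range(trials)) / trials
            print("synchronized" if synced else "random", round(avg, 1), "us")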

  4. Unlocking the black box: exploring the link between high-performance work systems and performance.

    PubMed

    Messersmith, Jake G; Patel, Pankaj C; Lepak, David P; Gould-Williams, Julian

    2011-11-01

    With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance. PMID:21787040

  5. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial, and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity needing less than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications, expanding deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial, and industrial applications that will benefit from this high-performance imaging system, and their forecast cost structure, is presented.

  6. The parallel I/O architecture of the high performance storage system (HPSS). Revision 1

    SciTech Connect

    Watson, R.W.; Coyne, R.A.

    1995-04-01

    Datasets up to terabyte size and petabyte capacities have created a serious imbalance between I/O and storage system performance and system functionality. One promising approach is the use of parallel data transfer techniques for client access to storage, peripheral-to-peripheral transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, Parallel Transport Protocol (PTP), parallel FTP, and parallel client Application Programming Interface (API) used by the High Performance Storage System (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
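
    A minimal sketch of the striping idea behind parallel transfers of this kind: the source is carved into fixed-size stripes handed round-robin to concurrent movers. Thread-based Python stands in here for HPSS's actual movers and Parallel Transport Protocol; names and parameters are illustrative.

        import threading

        def parallel_transfer(src: bytes, n_movers: int = 4, stripe: int = 1 << 20):
            # Carve the source into stripes; mover m copies stripes
            # m, m + n, m + 2n, ... (round-robin striping) concurrently.
            dst = bytearray(len(src))
            def mover(m):
                for off in range(m * stripe, len(src), n_movers * stripe):
                    dst[off:off + stripe] = src[off:off + stripe]
            threads = [threading.Thread(target=mover, args=(m,)) for m in range(n_movers)]
            for t in threads: t.start()
            for t in threads: t.join()
            return bytes(dst)

        data = bytes(range(256)) * 40000          # ~10 MB source
        assert parallel_transfer(data) == data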

  7. A tutorial on the construction of high-performance resolution/paramodulation systems

    SciTech Connect

    Butler, R.; Overbeek, R.

    1990-09-01

    Over the past 25 years, researchers have written numerous deduction systems based on resolution and paramodulation. Of these systems, a very few have been capable of generating and maintaining a "formula database" containing more than just a few thousand clauses. These few systems were used to explore mechanisms for rapidly extracting limited subsets of "relevant" clauses. We have written this tutorial to reflect some of the best ideas that have emerged and to cast them in a form that makes them easily accessible to students wishing to write their own high-performance systems. 4 refs.
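
    For readers new to the area, binary resolution on propositional clauses is the core inference these systems scale up (the tutorial's systems add first-order unification, paramodulation, and indexed retrieval of relevant clauses). A toy sketch, with representation choices that are mine rather than the tutorial's:

        def resolve(c1: frozenset, c2: frozenset):
            # All binary resolvents of two propositional clauses. A literal is a
            # signed atom like ('p', True); resolving on p joins the two clauses
            # minus the complementary pair.
            out = []
            for (atom, sign) in c1:
                if (atom, not sign) in c2:
                    out.append((c1 - {(atom, sign)}) | (c2 - {(atom, not sign)}))
            return out

        # {p, q} and {not-p, r} resolve on p to give {q, r}
        c1 = frozenset({("p", True), ("q", True)})
        c2 = frozenset({("p", False), ("r", True)})
        print(resolve(c1, c2))   # [frozenset({('q', True), ('r', True)})]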

  8. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner, and generating geometrically accurate orthorectified products. After three years of research and development, a high-performance system has been built and successfully delivered. The high-performance system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, distributing processing tasks across a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with the traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods and application

  9. DScan - a high-performance digital scanning system for entomological collections.

    PubMed

    Schmidt, Stefan; Balke, Michael; Lafogler, Stefan

    2012-01-01

    Here we describe a high-performance imaging system for creating high-resolution images of whole insect drawers. All components of the system are industrial standard and can be adapted to meet the specific needs of entomological collections. A controlling unit allows the setting of imaging area (drawer size), step distance between individual images, number of images, image resolution, and shooting sequence order through a set of parameters. The system is highly configurable and can be used with a wide range of different optical hardware and image processing software. PMID:22859887

  10. The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

    SciTech Connect

    Tierney, Brian; Johnston, William; Crowley, Brian; Hoo, Gary; Brooks, Chris; Gunter, Dan

    1999-12-23

    The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
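
    A minimal sketch of precision event logging in the NetLogger spirit: one timestamped, host-tagged record per event, so logs from different components can be merged into an end-to-end timeline. JSON lines stand in here for NetLogger's actual log format, and all names are illustrative.

        import io, json, socket, time

        def log_event(stream, event, **fields):
            # One record per line; a high-resolution timestamp plus a host tag
            # lets records from many components be merged and correlated.
            rec = {"ts": time.time(), "host": socket.gethostname(), "event": event}
            rec.update(fields)
            stream.write(json.dumps(rec) + "\n")

        buf = io.StringIO()
        log_event(buf, "transfer.start", nbytes=10 * 2**20, dest="vizclient")
        log_event(buf, "transfer.end")
        print(buf.getvalue())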

  11. Damage-Mitigating Control of Space Propulsion Systems for High Performance and Extended Life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang

    1994-01-01

    A major goal in the control of complex mechanical systems such as spacecraft, rocket engines, advanced aircraft, and power plants is to achieve high performance with increased reliability, component durability, and maintainability. The current practice of decision and control systems synthesis focuses on improving performance and diagnostic capabilities under constraints that often do not adequately represent materials degradation. In view of the high performance requirements of the system and the availability of improved materials, the lack of appropriate knowledge about the properties of these materials will lead either to less-than-achievable performance due to overly conservative design, or to over-straining of the structure, leading to unexpected failures and drastic reduction of the service life. The key idea in this report is that a significant improvement in service life can be achieved by a small reduction in the system's dynamic performance. The major task is to characterize the damage generation process, and then utilize this information in a mathematical form to synthesize a control law that meets the system requirements and simultaneously satisfies the constraints imposed by the material and structural properties of the critical components. The concept of damage mitigation is introduced for control of mechanical systems to achieve high performance with a prolonged life span. A model of fatigue damage dynamics is formulated in the continuous-time setting, instead of a cycle-based representation, for direct application to control systems synthesis. An optimal control policy is then formulated via nonlinear programming under specified constraints on the damage rate and accumulated damage. The results of simulation experiments for the transient upthrust of a bipropellant rocket engine are presented to demonstrate the efficacy of the damage-mitigating control concept.
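
    Schematically, the damage-mitigating control problem described here can be stated as a constrained optimal control problem of the following generic form (a sketch of the idea, not the authors' exact formulation):

        \min_{u(\cdot)} \; J = \int_0^{t_f} \left[ (x - x_{\mathrm{ref}})^{\top} Q \, (x - x_{\mathrm{ref}}) + u^{\top} R \, u \right] dt

    subject to the plant and damage dynamics and the damage constraints

        \dot{x} = f(x, u), \qquad \dot{\delta} = h(\sigma(x), \delta) \ge 0,
        \qquad \dot{\delta}(t) \le \dot{\delta}_{\max}, \qquad \delta(t_f) \le \delta_{\mathrm{allow}},

    where \delta is the accumulated fatigue damage and \sigma(x) the stress at the critical component. Accepting a slightly larger tracking cost J buys a disproportionately large reduction in accumulated damage, which is the trade the abstract describes.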

  12. A flexible and inexpensive high-performance auditory evoked response recording system appropriate for research purposes.

    PubMed

    Valderrama, Joaquin T; de la Torre, Angel; Alvarez, Isaac; Segura, Jose Carlos; Sainz, Manuel; Vargas, Jose Luis

    2014-10-01

    Recording auditory evoked responses (AER) is done not only in hospitals and clinics worldwide to detect hearing impairments and estimate hearing thresholds, but also in research centers to understand and model the mechanisms involved in the process of hearing. This paper describes a high-performance, flexible, and inexpensive AER recording system. A full description of the hardware and software modules that compose the AER recording system is provided. The performance of this system was evaluated by conducting five experiments with both real and artificially synthesized auditory brainstem response and middle latency response signals at different intensity levels and stimulation rates. The results indicate that the flexibility of the described system is appropriate to record AER signals under several recording conditions. The AER recording system described in this article is a flexible and inexpensive high-performance AER recording system. This recording system also incorporates a platform through which users are allowed to implement advanced signal processing methods. Moreover, its manufacturing cost is significantly lower than that of other commercially available alternatives. These advantages may prove useful in many research applications in audiology. PMID:24870606
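
    The core signal-processing step in any AER recording system of this kind is stimulus-locked averaging with artifact rejection: the phase-locked response grows with averaging while the background EEG cancels. A minimal numpy sketch under assumed units (microvolts) and window length; the described system's processing is certainly richer.

        import numpy as np

        def average_aer(eeg, stim_onsets, fs, win_ms=10.0, reject_uv=25.0):
            # Cut one epoch per stimulus onset, reject epochs whose peak
            # amplitude suggests an artifact, and average the survivors.
            n = int(fs * win_ms / 1000)
            epochs = []
            for onset in stim_onsets:
                epoch = eeg[onset:onset + n]
                if len(epoch) == n and np.max(np.abs(epoch)) < reject_uv:
                    epochs.append(epoch)
            if not epochs:
                raise ValueError("all epochs rejected")
            return np.mean(epochs, axis=0), len(epochs)

        # SNR of the average improves roughly with sqrt(number of epochs kept).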

  13. Image Processor Electronics (IPE): The High-Performance Computing System for NASA SWIFT Mission

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang H.; Settles, Beverly A.

    2003-01-01

    Gamma Ray Bursts (GRBs) are believed to be the most powerful explosions that have occurred in the Universe since the Big Bang and remain a mystery to the scientific community. Swift, a NASA mission with international participation, was designed and built in preparation for a 2003 launch to help determine the origin of Gamma Ray Bursts. Locating the position in the sky where a burst originates requires intensive computing, because the duration of a GRB can range from a few milliseconds up to approximately a minute. The instrument data system must constantly accept multiple images representing large regions of the sky that are generated by sixteen gamma ray detectors operating in parallel. It must then process the received images very quickly in order to determine the existence of possible gamma ray bursts and their locations. The high-performance instrument data computing system that accomplishes this is called the Image Processor Electronics (IPE). The IPE was designed, built and tested by NASA Goddard Space Flight Center (GSFC) to meet these challenging requirements. The IPE is a small-size, low-power, high-performance computing system for space applications. This paper addresses the system implementation and the system hardware architecture of the IPE, and concludes with the IPE system performance measured during end-to-end system testing.

  14. BurstMem: A High-Performance Burst Buffer System for Scientific Applications

    SciTech Connect

    Wang, Teng; Oral, H Sarp; Wang, Yandong; Settlemyer, Bradley W; Atchley, Scott; Yu, Weikuan

    2014-01-01

    The growth of computing power on large-scale systems requires a commensurately high-bandwidth I/O system. Many parallel file systems are designed to provide fast, sustainable I/O in response to applications' soaring requirements. To meet this need, a novel system is needed to temporarily buffer bursty I/O and gradually flush datasets to long-term parallel file systems. In this paper, we introduce the design of BurstMem, a high-performance burst buffer system. BurstMem provides a storage framework with efficient storage and communication management strategies. Our experiments demonstrate that BurstMem is able to speed up the I/O performance of scientific applications by up to 8.5× on leadership computing systems.
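
    The pattern BurstMem implements can be reduced to a few lines: absorb bursty writes into a fast local tier and drain them to the parallel file system in the background. The sketch below models that flow with local directories and a flusher thread; the paths, sizes, and API are illustrative, not BurstMem's actual interfaces.

        # Hedged sketch: the absorb-then-drain burst-buffer pattern.
        import os, queue, shutil, tempfile, threading

        class BurstBuffer:
            def __init__(self, pfs_dir):
                self.staging = tempfile.mkdtemp(prefix="bb_")  # fast tier (stand-in for SSD)
                self.pfs_dir = pfs_dir                         # slow long-term tier
                self.pending = queue.Queue()
                threading.Thread(target=self._drain, daemon=True).start()

            def write(self, name, data):
                # The application-facing write returns once the local copy lands.
                path = os.path.join(self.staging, name)
                with open(path, "wb") as f:
                    f.write(data)
                self.pending.put(path)

            def _drain(self):
                # The background flusher trickles data out between I/O bursts.
                while True:
                    path = self.pending.get()
                    shutil.move(path, os.path.join(self.pfs_dir,
                                                   os.path.basename(path)))
                    self.pending.task_done()

        buf = BurstBuffer(tempfile.mkdtemp(prefix="pfs_"))
        for step in range(8):                  # a bursty checkpoint phase
            buf.write(f"ckpt_{step}.bin", os.urandom(1 << 16))
        buf.pending.join()                     # block until everything is flushed
        print("flushed:", sorted(os.listdir(buf.pfs_dir)))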

  15. High-performance gimbal control for self-protection weapon systems

    NASA Astrophysics Data System (ADS)

    Downs, James; Smith, Stephen A.; Schwickert, Jim; Stockum, Larry A.

    1998-07-01

    The gimbal and control system for a high-performance acquisition, tracking, and pointing system is described. This system provides full hemispherical coverage, precision stabilization, rapid position response, and precision laser pointing. The high performance laser pointing system (HPLPS) receives position and rate cues from an integrated threat-warning system, slews to the predicted target location, then acquires, tracks, and designates the target. The azimuth and elevation axes of the HPLPS are inertially stabilized with independent, high-bandwidth inertial rate loops. The cue-to-position control loop is implemented using a time-optimal control algorithm which slews each axis of the platform to the predicted target location with high accuracy and zero overshoot in minimum time. After cuing to position, an auto-track mode engages with a type-4, high-bandwidth track loop. Track loop integrators are initialized to keep the platform moving at the cued target rate as control transfers from the position cue to auto-track mode. After initially tracking with a narrow-field-of-view tracking sensor, an active laser track is performed with a narrower-field-of-view laser-spot-tracking sensor. The gimbal electronics use a Texas Instruments TMS320C30 digital signal processor and a proprietary software executive to achieve the performance required for the 960 Hz control loop sample rates. Optical encoder, resolver, and high-bandwidth fiber-optic-gyro sensors are used. Linear amplifiers drive the azimuth and elevation mirror motors, and a sine-wave-commutated amplifier drives the outer gimbal motor.
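
    The cue-to-position behavior described above (maximum acceleration, then a switch to maximum deceleration so the axis arrives with zero overshoot) is the classical bang-bang law for a double integrator. The sketch below simulates one axis at the 960 Hz loop rate from the abstract; the double-integrator gimbal model and the acceleration limit are assumptions.

        # Hedged sketch: time-optimal (bang-bang) slew of a double-integrator axis.
        a_max, fs = 200.0, 960.0     # accel limit in deg/s^2 (assumed); loop rate in Hz
        theta_cmd = 30.0             # cued target angle in degrees (illustrative)
        dt = 1.0 / fs

        theta, omega, log = 0.0, 0.0, []
        for _ in range(int(2.0 * fs)):
            err = theta_cmd - theta
            # Switching curve: decelerate once the distance needed to stop from
            # the current rate equals the remaining pointing error.
            stop_dist = omega * abs(omega) / (2.0 * a_max)
            a = a_max if err > stop_dist else -a_max
            omega += a * dt
            theta += omega * dt
            log.append(theta)

        overshoot = max(0.0, max(log) - theta_cmd)
        print(f"final angle {log[-1]:.3f} deg, peak overshoot {overshoot * 1e3:.1f} mdeg")

    In discrete time the switch lands at worst one sample late, so a small residual overshoot on the order of the peak rate times the sample period remains; in the real system the auto-track loop takes over at that point.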

  16. Coal-fired high performance power generating system. Quarterly progress report, January 1--March 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    This report covers work carried out under Task 2, Concept Definition and Analysis, and Task 3, Preliminary R&D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤25% NSPS; coal ≥65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The cycle optimization effort has brought about several revisions to the system configuration resulting from: (1) the use of Illinois No. 6 coal instead of Utah Blind Canyon; (2) the use of coal rather than methane as a reburn fuel; (3) reducing radiant section outlet temperatures to 1700°F (down from 1800°F); and (4) the need to use higher-performance (higher-cost) steam cycles to offset losses introduced as more realistic operating and construction constraints are identified.

  17. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is enabling fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that must be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
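
    The block-volume idea is straightforward to sketch: partition the volume into distributable blocks, process the blocks in a worker pool, and reassemble the output. The block size and the per-block operator below are illustrative stand-ins for the platform's size-adaptive blocks and real image algorithms.

        # Hedged sketch: block-decomposed parallel processing of a 3D volume.
        import numpy as np
        from multiprocessing import Pool

        BLOCK = 32  # fixed here for brevity; size-adaptive in the described platform

        def process_block(task):
            origin, block = task
            return origin, block - block.mean()   # stand-in for a real 3D operator

        def run(volume):
            tasks = [((z, y, x), volume[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK].copy())
                     for z in range(0, volume.shape[0], BLOCK)
                     for y in range(0, volume.shape[1], BLOCK)
                     for x in range(0, volume.shape[2], BLOCK)]
            out = np.empty_like(volume)
            with Pool() as pool:                  # blocks are independent tasks
                for (z, y, x), res in pool.imap_unordered(process_block, tasks):
                    out[z:z+res.shape[0], y:y+res.shape[1], x:x+res.shape[2]] = res
            return out

        if __name__ == "__main__":
            vol = np.random.rand(64, 64, 64).astype(np.float32)
            print(run(vol).shape)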

  18. HPNAIDM: The High-Performance Network Anomaly/Intrusion Detection and Mitigation System

    SciTech Connect

    Chen, Yan

    2013-12-05

    Identifying traffic anomalies and attacks rapidly and accurately is critical for large network operators. With the rapid growth of network bandwidth, such as the next generation DOE UltraScience Network, and the fast emergence of new attacks, viruses and worms, existing network intrusion detection systems (IDS) are insufficient because they: • are mostly host-based and not scalable to high-performance networks; • are mostly signature-based and unable to adaptively recognize flow-level unknown attacks; • cannot differentiate malicious events from unintentional anomalies. To address these challenges, we proposed and developed a new paradigm called the high-performance network anomaly/intrusion detection and mitigation (HPNAIDM) system. The new paradigm is significantly different from existing IDSes, with the following features (research thrusts): • online traffic recording and analysis on high-speed networks; • online adaptive flow-level anomaly/intrusion detection and mitigation; • an integrated approach for false positive reduction. Our research prototype and evaluation demonstrate that the HPNAIDM system is highly effective and economically feasible. Beyond satisfying the pre-set goals, we even exceeded them significantly (see more details in the next section). Overall, our project harvested 23 publications (2 book chapters, 6 journal papers and 15 peer-reviewed conference/workshop papers). Besides, we built a website for technique dissemination, which hosts two system prototype releases for the research community. We also filed a patent application and developed strong international and domestic collaborations spanning both academia and industry.
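
    A minimal flavor of adaptive, flow-level detection can be given in a few lines: maintain an exponentially weighted baseline of traffic volume and flag intervals that deviate by several standard deviations, with no signature involved. The thresholds and the synthetic trace below are illustrative and unrelated to the project's actual detectors.

        # Hedged sketch: adaptive volume-anomaly detection with an EWMA baseline.
        import random

        alpha, k = 0.1, 4.0                   # smoothing factor and alarm threshold
        mean, var = None, 0.0
        random.seed(1)
        trace = [random.gauss(1000, 50) for _ in range(200)]
        trace[120:125] = [5000.0] * 5         # injected anomaly (e.g., a flood)

        for t, x in enumerate(trace):
            if mean is None:
                mean = x                      # initialize the baseline
                continue
            if var and abs(x - mean) > k * var ** 0.5:
                print(f"interval {t}: anomalous volume {x:.0f} (baseline {mean:.0f})")
                continue                      # keep the attack out of the baseline
            mean = (1 - alpha) * mean + alpha * x
            var = (1 - alpha) * var + alpha * (x - mean) ** 2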

  19. High-performance electronics for time-of-flight PET systems.

    PubMed

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively. PMID:24575149
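
    The coincidence step itself (digitally comparing TDC time stamps) reduces to a two-pointer sweep over sorted singles lists. The window and the event lists below are illustrative; only the matching logic follows the description above.

        # Hedged sketch: coincidence identification by time-stamp comparison.
        def find_coincidences(stamps_a, stamps_b, window_ps=500):
            """Two-pointer sweep over sorted TDC time stamps (picoseconds)."""
            pairs, i, j = [], 0, 0
            while i < len(stamps_a) and j < len(stamps_b):
                dt = stamps_a[i] - stamps_b[j]
                if abs(dt) <= window_ps:
                    pairs.append((stamps_a[i], stamps_b[j]))
                    i += 1
                    j += 1
                elif dt > 0:
                    j += 1      # detector B's event is too early; advance B
                else:
                    i += 1      # detector A's event is too early; advance A
            return pairs

        a = [1000, 5000, 9000, 12000]
        b = [1100, 7000, 9100, 30000]
        print(find_coincidences(a, b))   # [(1000, 1100), (9000, 9100)]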

  20. High-performance electronics for time-of-flight PET systems

    PubMed Central

    Choong, W.-S.; Peng, Q.; Vu, C.Q.; Turko, B.T.; Moses, W.W.

    2014-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC’s CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC’s CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals respectively. PMID:24575149

  1. The parallel I/O architecture of the High Performance Storage System (HPSS)

    SciTech Connect

    Watson, R.W.; Coyne, R.A.

    1995-02-01

    Rapid improvements in computational science, processing capability, main memory sizes, data collection devices, multimedia capabilities and integration of enterprise data are producing very large datasets (tens to hundreds of gigabytes up to terabytes). This rapid growth of data has resulted in a serious imbalance in I/O and storage system performance and functionality. One promising approach to restoring balanced I/O and storage system performance is the use of parallel data transfer techniques for client access to storage, device-to-device transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, the Parallel Transport Protocol, parallel FTP, and the parallel client Application Programming Interface (API) used by the High Performance Storage System (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
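
    The essence of the parallel-transfer approach is that stripes of a single file move over several connections at once and are reassembled by offset. The sketch below imitates that with threads reading slices of a local file; the worker count and stripe size are illustrative, and this stands in for, rather than implements, the Parallel Transport Protocol.

        # Hedged sketch: striped, multi-worker transfer reassembled by offset.
        import os, tempfile
        from concurrent.futures import ThreadPoolExecutor

        STRIPE = 1 << 20   # 1 MiB stripe (illustrative)

        def fetch_stripe(path, offset):
            # In HPSS each stripe would come from a separate mover/device over
            # its own connection; here each worker reads a slice of one file.
            with open(path, "rb") as f:
                f.seek(offset)
                return offset, f.read(STRIPE)

        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.write(os.urandom(5 * STRIPE + 12345))
        tmp.close()

        size = os.path.getsize(tmp.name)
        out = bytearray(size)
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(fetch_stripe, tmp.name, off)
                       for off in range(0, size, STRIPE)]
            for fut in futures:
                off, chunk = fut.result()
                out[off:off + len(chunk)] = chunk

        with open(tmp.name, "rb") as f:
            print("reassembled correctly:", bytes(out) == f.read())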

  2. A High-Performance Method for Simulating Surface Rainfall-Runoff Dynamics Using a Particle System

    NASA Astrophysics Data System (ADS)

    Zhang, Fangli; Zhou, Qiming; Li, Qingquan; Wu, Guofeng; Liu, Jun

    2016-06-01

    The simulation of the rainfall-runoff process is essential for disaster emergency response and sustainable development. One common disadvantage of existing conceptual hydrological models is that they are highly dependent upon specific spatial-temporal contexts. Meanwhile, due to the inter-dependence of adjacent flow paths, it is still difficult for RS- or GIS-supported distributed hydrological models to achieve high performance in real-world applications. As an attempt to improve the performance of those models, this study presents a high-performance rainfall-runoff simulating framework based on a flow path network and a separate particle system. The vector-based flow path lines are topologically linked to constrain the movements of independent raindrop particles. A separate particle system, representing surface runoff, is used to model the precipitation process and simulate surface flow dynamics. The trajectory of each particle is constrained by the flow path network and can be tracked by concurrent processors in a parallel cluster system. The results of a speedup experiment show that the proposed framework can significantly improve simulation performance simply by adding independent processors. By separating the catchment elements from the accumulated water, this study provides an extensible solution for improving existing distributed hydrological models. Further, a parallel modeling and simulation platform still needs to be developed and validated before it can be applied to monitoring real-world hydrologic processes.
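
    The separation the framework proposes is easy to picture in code: a static flow-path network constrains otherwise independent raindrop particles, so particle trajectories can be advanced in parallel. The tiny network, step rule, and particle count below are all illustrative.

        # Hedged sketch: independent particles routed along a flow-path network.
        import random
        from concurrent.futures import ThreadPoolExecutor

        downstream = {0: 1, 1: 3, 2: 3, 3: 4, 4: None}   # node -> next node; 4 is the outlet

        def advance(seed, steps=10):
            rng = random.Random(seed)
            node = rng.choice([0, 1, 2])                 # where this raindrop lands
            path = [node]
            for _ in range(steps):
                nxt = downstream[node]
                if nxt is None:
                    break                                # reached the catchment outlet
                node = nxt
                path.append(node)
            return path

        with ThreadPoolExecutor(max_workers=4) as pool:  # particles never interact
            for path in pool.map(advance, range(6)):
                print(" -> ".join(map(str, path)))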

  3. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations in order to mark up and annotate them. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks of such a system, and propose and evaluate a solution using hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.
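
    Of the three techniques evaluated, the versioning scheme is the simplest to sketch: each revision of an image gets a monotonically increasing version row, which yields the audit trail. The schema and fields below are illustrative assumptions, not the paper's actual design.

        # Hedged sketch: database-backed image versioning for audit trails.
        import hashlib, sqlite3, time

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE image_version (
            image_id TEXT, version INTEGER, sha256 TEXT, created REAL,
            PRIMARY KEY (image_id, version))""")

        def commit_revision(image_id, payload):
            row = db.execute("SELECT MAX(version) FROM image_version "
                             "WHERE image_id = ?", (image_id,)).fetchone()
            version = (row[0] or 0) + 1       # next version for this image
            db.execute("INSERT INTO image_version VALUES (?, ?, ?, ?)",
                       (image_id, version,
                        hashlib.sha256(payload).hexdigest(), time.time()))
            return version

        print(commit_revision("scan-001", b"original pixels"))    # -> 1
        print(commit_revision("scan-001", b"annotated pixels"))   # -> 2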

  4. Coal-fired high performance power generating system. Quarterly progress report, April 1--June 30, 1993

    SciTech Connect

    Not Available

    1993-11-01

    This report covers work carried out under Task 2, Concept Definition and Analysis, Task 3, Preliminary R&D, and Task 4, Commercial Generating Plant Design, under Contract AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% thermal efficiency; NOx, SOx and particulates ≤25% NSPS; coal ≥65% of heat input; and all solid wastes benign. In order to achieve these goals, our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. A survey of currently available high-temperature alloys has been completed, and some of their high-temperature properties are shown for comparison. Several of the most promising candidates will be selected for testing to determine corrosion resistance and high-temperature strength. The corrosion resistance testing of candidate refractory coatings is continuing, and some of the recent results are presented. This effort will provide important design information that will ultimately establish the operating ranges of the HITAF.

  5. Towards Building High Performance Medical Image Management System for Clinical Trials.

    PubMed

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-01-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations in order to mark up and annotate them. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks of such a system, and propose and evaluate a solution using hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems. PMID:21603096

  6. Microdialysis based monitoring of subcutaneous interstitial and venous blood glucose in Type 1 diabetic subjects by mid-infrared spectrometry for intensive insulin therapy

    NASA Astrophysics Data System (ADS)

    Heise, H. Michael; Kondepati, Venkata Radhakrishna; Damm, Uwe; Licht, Michael; Feichtner, Franz; Mader, Julia Katharina; Ellmerer, Martin

    2008-02-01

    Implementing strict glycemic control can reduce the risk of serious complications in both diabetic and critically ill patients. For this purpose, many different blood glucose monitoring techniques and insulin infusion strategies have been tested toward the realization of an artificial pancreas under closed-loop control. In contrast to competing subcutaneously implanted electrochemical biosensors, microdialysis based systems for sampling body fluids from either the interstitial adipose tissue compartment or from venous blood have been developed, which allow ex-vivo glucose monitoring by mid-infrared spectrometry. For the first option, a commercially available, subcutaneously inserted CMA 60 microdialysis catheter has been used routinely. The vascular body interface includes a double-lumen venous catheter in combination with whole blood dilution using a heparin solution. The diluted whole blood is transported to a flow-through dialysis cell, where the harvesting of analytes across the microdialysis membrane takes place at high recovery rates. The dialysate is continuously transported to the IR sensor. Ex-vivo measurements lasting up to 28 hours were conducted on type-1 diabetic subjects. The experiments have shown excellent agreement between the sensor readout and the reference blood glucose concentration values. The simultaneous assessment of dialysis recovery rates renders a reliable quantification of whole blood concentrations of glucose and metabolites (urea, lactate, etc.) after taking blood dilution into account. Our results from transmission spectrometry indicate that the developed bedside device enables reliable long-term glucose monitoring with reagent- and calibration-free operation.
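
    The quantification step the abstract relies on is a simple correction: the whole-blood concentration follows from the dialysate reading once the membrane recovery rate and the heparin dilution are divided out. The numbers below are illustrative, not values from the study.

        # Hedged worked example: recovery and dilution correction.
        c_dialysate = 2.8   # mmol/L glucose measured in the dialysate by the IR sensor
        recovery = 0.90     # fraction of analyte harvested across the membrane
        dilution = 0.50     # blood fraction remaining after heparin-solution dilution

        c_blood = c_dialysate / (recovery * dilution)
        print(f"estimated whole-blood glucose: {c_blood:.2f} mmol/L")  # 6.22 mmol/L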

  7. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
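
    The filtering idea at the center of this architecture can be sketched as a publish/subscribe predicate test applied close to the event sources, so that only matching events travel to the management tools. The event fields and predicates below are illustrative.

        # Hedged sketch: event filtering via subscription predicates.
        class EventFilter:
            def __init__(self):
                self.subscriptions = []            # (predicate, sink) pairs

            def subscribe(self, predicate, sink):
                self.subscriptions.append((predicate, sink))

            def publish(self, event):
                for predicate, sink in self.subscriptions:
                    if predicate(event):           # drop non-matching traffic early
                        sink(event)

        f = EventFilter()
        f.subscribe(lambda e: e["severity"] >= 3,
                    lambda e: print("debugger notified:", e))
        f.publish({"src": "nodeA", "severity": 1})  # filtered out near the source
        f.publish({"src": "nodeB", "severity": 4})  # delivered to the subscriber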

  8. Extending PowerPack for Profiling and Analysis of High Performance Accelerator-Based Systems

    SciTech Connect

    Li, Bo; Chang, Hung-Ching; Song, Shuaiwen; Su, Chun-Yi; Meyer, Timmy; Mooring, John; Cameron, Kirk

    2014-12-01

    Accelerators offer a substantial increase in efficiency for high-performance systems, providing speedups for computational applications that leverage hardware support for highly parallel codes. However, the power use of some accelerators exceeds 200 watts at idle, which means that use at exascale would come with a significant increase in power at a time when we face a power ceiling of about 20 megawatts. Despite the growing domination of accelerator-based systems in the Top500 and Green500 lists of the fastest and most efficient supercomputers, there are few detailed studies comparing the power and energy use of common accelerators. In this work, we conduct detailed experimental studies of the power usage and distribution of Xeon-Phi-based systems in comparison to NVIDIA Tesla based and Intel Sandy Bridge based systems.

  9. An Empirical Examination of the Mechanisms Mediating between High-Performance Work Systems and the Performance of Japanese Organizations

    ERIC Educational Resources Information Center

    Takeuchi, Riki; Lepak, David P.; Wang, Heli; Takeuchi, Kazuo

    2007-01-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human…

  10. The design of a high performance dataflow processor for multiprocessor systems

    SciTech Connect

    Luc, K.Q.

    1989-01-01

    The objective of this work is to design a high performance dynamic dataflow processor for multiprocessor systems. The performance of contemporary dataflow processors is limited by the presence of a component called a matching unit, whose function is to match instruction tokens in order to detect the executability of instructions. Since activities within the matching unit are sequential in nature and require multiple memory accesses, the unit has been identified as a major performance bottleneck in a prototype processor. The author proposes a natural way to partition the set of tokens and presents a new implementation for the matching unit, called an Instance-Based Matching Unit. The new unit requires tokens to be partitioned into blocks and allows matching of these blocks of tokens to proceed concurrently. With the new matching unit, a substantial throughput enhancement is reported. The author then analyzes the throughputs at various stages of a conventional dataflow processor; the results direct the design toward an optimum configuration for an effective sub-processor. The maximum throughput of this sub-processor is determined by the throughput of a queue. With the sub-processor as a building block, a high performance dataflow processor is presented which consists of multiple copies of the sub-processor. Characteristics of the processor are studied with the Livermore Fortran Kernels as inputs. The performance of this processor is high, and it increases with the number of sub-processors.
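
    The matching operation being parallelized can be sketched directly: a token waits in the match store until its partner (same destination instruction, same instance tag) arrives, and the instruction fires when the pair is complete. Partitioning this store by instance tag is what lets blocks of tokens match concurrently. The token layout below is illustrative.

        # Hedged sketch: two-operand dataflow token matching keyed by instance.
        from collections import namedtuple

        Token = namedtuple("Token", "instr instance port value")
        store = {}   # one partition per instance block in the proposed unit

        def arrive(tok):
            key = (tok.instr, tok.instance)
            if key in store:                   # partner is waiting: fire now
                pair = sorted([store.pop(key), tok], key=lambda t: t.port)
                print(f"fire {tok.instr}[{tok.instance}] with operands "
                      f"({pair[0].value}, {pair[1].value})")
            else:
                store[key] = tok               # wait for the matching operand

        arrive(Token("add", 0, 0, 7))
        arrive(Token("add", 1, 0, 2))
        arrive(Token("add", 0, 1, 35))         # completes instance 0 and fires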

  11. High-reliability high-performance optical data storage system architecture

    NASA Astrophysics Data System (ADS)

    Jin, Hai; Cheng, Peng; Feng, Dan; Zhou, Xinrong

    1998-08-01

    With the terabyte demands of storage in many applications, improving the speed of optical disks, especially their write performance, will extend the scope of their applications and enhance the overall performance of computer systems. One effective way to improve the speed is to use several optical disk drives together to construct an optical storage array similar to a Redundant Array of Independent Disks (RAID). Among typical RAID architectures, the most common fault-tolerant configurations are RAID level 1 and RAID level 5. Neither is suitable for an optical storage array: RAID level 1 has the most redundancy, while the write performance of RAID level 5 is one-fourth that of RAID level 0, especially because of the small-write problem. In this paper, we propose a high-performance, high-reliability optical disk array architecture with less redundancy, called the Mirror Striped Disk Array (MSDA). It is a novel solution to the small-write problem for disk arrays. MSDA stores the original data in two ways: once on a single optical disk, and once across a plurality of optical disks in the manner of RAID level 0. The redundancy of the whole system is less than that of RAID level 1, but with the same reliability as RAID level 5. Because the RAID level 0 portion of the optical storage system performs much better than RAID level 5 in an ordinary disk array, the write performance loss is avoided when using the MSDA architecture. Since MSDA omits the parity generation procedure when writing new data, its overall performance is the same as that of RAID level 0. Using this architecture, we can achieve a high-reliability, high-performance optical storage system without adding any extra redundancy and without losing any performance compared with RAID level 0, but with reliability much higher than that of RAID level 5.
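
    The MSDA write path is simple enough to show directly: the data is written once, whole, to the mirror disk, and once striped RAID level 0 style across the remaining disks, with no parity to compute. Disk contents are modeled as byte buffers and the stripe size is illustrative.

        # Hedged sketch: the Mirror Striped Disk Array write path.
        STRIPE = 4

        def msda_write(data, mirror, stripes):
            mirror.extend(data)   # full copy on the mirror disk (no parity work)
            # RAID-0 striping across the remaining disks for parallel bandwidth.
            chunks = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
            for n, chunk in enumerate(chunks):
                stripes[n % len(stripes)].extend(chunk)

        mirror = bytearray()
        stripes = [bytearray() for _ in range(3)]
        msda_write(b"ABCDEFGHIJKLMNOP", mirror, stripes)
        print([bytes(s) for s in stripes])   # striped copy serves fast reads
        print(bytes(mirror))                 # mirror copy provides redundancy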

  12. Management of Virtual Large-scale High-performance Computing Systems

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2011-01-01

    Linux is widely used on high-performance computing (HPC) systems, from commodity clusters to Cray supercomputers (which run the Cray Linux Environment). These platforms primarily differ in their system configuration: some only use SSH to access compute nodes, whereas others employ full resource management systems (e.g., Torque and ALPS on Cray XT systems). Furthermore, the latest improvements in system-level virtualization techniques, such as hardware support, virtual machine migration for system resilience purposes, and reduction of virtualization overheads, enable the use of virtual machines on HPC platforms. Currently, tools for the management of virtual machines in the context of HPC systems are still quite basic, and often tightly coupled to the target platform. In this document, we present a new system tool for the management of virtual machines in the context of large-scale HPC systems, including a run-time system and support for all major virtualization solutions. The proposed solution is based on two key aspects. First, Virtual System Environments (VSE), introduced in a previous study, provide a flexible method to define the software environment that will be used within virtual machines. Second, we propose a new system run-time for the management and deployment of VSEs on HPC systems, which supports a wide range of system configurations. For instance, this generic run-time can interact with resource managers such as Torque for the management of virtual machines. Finally, the proposed solution provides appropriate abstractions to enable use with a variety of virtualization solutions on different Linux HPC platforms, including Xen, KVM and the HPC-oriented Palacios.

  13. A survey on resource allocation in high performance distributed computing systems

    SciTech Connect

    Hussain, Hameed; Malik, Saif Ur Rehman; Hameed, Abdul; Khan, Samee Ullah; Bickler, Gage; Min-Allah, Nasro; Qureshi, Muhammad Bilal; Zhang, Limin; Yongji, Wang; Ghani, Nasir; Kolodziej, Joanna; Zomaya, Albert Y.; Xu, Cheng-Zhong; Balaji, Pavan; Vishnu, Abhinav; Pinel, Fredric; Pecero, Johnatan E.; Kliazovich, Dzmitry; Bouvry, Pascal; Li, Hongxiang; Wang, Lizhe; Chen, Dan; Rayes, Ammar

    2013-11-01

    An efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many projects are dedicated to large-scale distributed computing systems that have designed and developed resource allocation mechanisms with a variety of architectures and services. This study reports a comprehensive survey describing resource allocation in various HPC systems. The aim of the work is to aggregate the existing solutions for HPC under a joint framework, and to provide a thorough analysis and characterization of the resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance improvement of all HPC classes; therefore, a comprehensive discussion of the resource allocation strategies widely deployed in HPC environments is required, which is one of the motivations of this survey. Moreover, we have classified HPC systems into three broad categories, namely (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are cataloged into pure software and hybrid/hardware solutions. The system classification is used to identify the approaches followed by the implementation of existing resource allocation strategies that are widely presented in the literature.

  14. State observers and Kalman filtering for high performance vibration isolation systems

    SciTech Connect

    Beker, M. G. Bertolini, A.; Hennes, E.; Rabeling, D. S.; Brand, J. F. J. van den; Bulten, H. J.

    2014-03-15

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system.
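
    The control scheme the paper demonstrates, an LQR gain acting on a Kalman state estimate (i.e., LQG control), can be sketched on a one-degree-of-freedom payload model. Everything below (the double-integrator plant, noise levels, and weights) is illustrative and unrelated to the Advanced Virgo bench parameters.

        # Hedged sketch: LQR feedback on a Kalman estimate for a 1-DOF payload.
        import numpy as np

        dt = 0.01
        A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity double integrator
        B = np.array([[0.0], [dt]])
        C = np.array([[1.0, 0.0]])              # only position is measured
        Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

        # LQR gain via fixed-point iteration of the discrete Riccati equation.
        P = Q.copy()
        for _ in range(500):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)

        x = np.array([[1.0], [0.0]])            # true state: residual motion
        xhat, Pk = np.zeros((2, 1)), np.eye(2)  # observer estimate and covariance
        W, V = 1e-6 * np.eye(2), np.array([[1e-4]])   # process / sensor noise
        rng = np.random.default_rng(0)

        for _ in range(1000):
            y = C @ x + rng.normal(0.0, V[0, 0] ** 0.5)  # noisy position reading
            S = C @ Pk @ C.T + V                         # Kalman update
            L = Pk @ C.T @ np.linalg.inv(S)
            xhat = xhat + L @ (y - C @ xhat)
            Pk = (np.eye(2) - L @ C) @ Pk
            u = -K @ xhat                                # feedback on the estimate
            x = A @ x + B @ u                            # propagate the true plant
            xhat = A @ xhat + B @ u                      # propagate the observer
            Pk = A @ Pk @ A.T + W

        print("residual motion:", float(abs(x[0, 0])))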

  15. State observers and Kalman filtering for high performance vibration isolation systems.

    PubMed

    Beker, M G; Bertolini, A; van den Brand, J F J; Bulten, H J; Hennes, E; Rabeling, D S

    2014-03-01

    There is a strong scientific case for the study of gravitational waves at or below the lower end of current detection bands. To take advantage of this scientific benefit, future generations of ground based gravitational wave detectors will need to expand the limit of their detection bands towards lower frequencies. Seismic motion presents a major challenge at these frequencies and vibration isolation systems will play a crucial role in achieving the desired low-frequency sensitivity. A compact vibration isolation system designed to isolate in-vacuum optical benches for Advanced Virgo will be introduced and measurements on this system are used to present its performance. All high performance isolation systems employ an active feedback control system to reduce the residual motion of their suspended payloads. The development of novel control schemes is needed to improve the performance beyond what is currently feasible. Here, we present a multi-channel feedback approach that is novel to the field. It utilizes a linear quadratic regulator in combination with a Kalman state observer and is shown to provide effective suppression of residual motion of the suspended payload. The application of state observer based feedback control for vibration isolation will be demonstrated with measurement results from the Advanced Virgo optical bench suspension system. PMID:24689604

  16. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles

    NASA Astrophysics Data System (ADS)

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles was proposed. We synthesized a kind of hexagonal monodisperse β-NaYF4:Yb3+,Er3+,Tm3+ upconversion nanoparticle and manipulated the intensity ratio of the red emission (at 653 nm) and the green emission (at 523 and 541 nm) to around 2 : 1, in order to match well with the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect between the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows obvious switching behavior with high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, an important anion in environmental and physiological systems, which can also reduce Prussian blue to Prussian white nanoparticles leading to a decrease of the absorption spectrum, was chosen as the target. We were able to determine the concentration of sulfite in aqueous solution with a low detection limit and a broad linear relationship.

  17. Multisensory systems integration for high-performance motor control in flies.

    PubMed

    Frye, Mark A

    2010-06-01

    Engineered tracking systems 'fuse' data from disparate sensor platforms, such as radar and video, to synthesize information that is more reliable than any single input. The mammalian brain registers visual and auditory inputs to directionally localize an interesting environmental feature. For a fly, sensory perception is challenged by the extreme performance demands of high speed flight. Yet even a fruit fly can robustly track a fragmented odor plume through varying visual environments, outperforming any human engineered robot. Flies integrate disparate modalities, such as vision and olfaction, which are neither related by spatiotemporal spectra nor processed by registered neural tissue maps. Thus, the fly is motivating new conceptual frameworks for how low-level multisensory circuits and functional algorithms produce high-performance motor control. PMID:20202821

  18. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME. PMID:24514859
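
    Contention resolution in the wavelength domain rests on the AWGR's cyclic routing property: with N ports and N wavelengths, input i reaches output (i + w) mod N by transmitting on wavelength w, so every input can address every output without electronic arbitration inside the fabric. The sketch below uses the idealized textbook form of that routing function.

        # Hedged sketch: the cyclic wavelength-routing function of an N x N AWGR.
        N = 8

        def awgr_output(in_port, wavelength):
            return (in_port + wavelength) % N

        def wavelength_for(in_port, out_port):
            return (out_port - in_port) % N

        # Each input reaches every output, each on a distinct wavelength.
        for i in range(N):
            assert sorted(awgr_output(i, w) for w in range(N)) == list(range(N))
        print("port 3 -> port 6 uses wavelength", wavelength_for(3, 6))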

  19. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  20. Users matter : multi-agent systems model of high performance computing cluster users.

    SciTech Connect

    North, M. J.; Hood, C. S.; Decision and Information Sciences; IIT

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  1. Determination of metabolites of cytochrome P-450 model systems using high-performance liquid chromatography.

    PubMed

    Esclade, L; Guillochon, D; Thomas, D

    1985-06-14

    High-performance liquid chromatographic techniques were developed for the simultaneous detection of metabolites in a cytochrome P-450 model system composed of NADH, haemoglobin and methylene blue. Monohydroxylated metabolites were determined following aniline, acetanilide and phenol hydroxylations. 4-Aminoantipyrine, 7-hydroxycoumarin and p-nitrophenol were determined after dealkylation of 4-N,N-dimethylaminoantipyrine, 7-ethoxycoumarin and p-nitroanisole. These substrates are commonly used for measuring cytochrome P-450 activities. Treatment of the samples was minimal, consisting of a simple deproteinization, and did not involve any organic extraction. Separations were carried out on reversed-phase columns and the products were detected by UV absorption. Separations were completed in less than 15 min and the detection limits were between 0.5 and 4 microM. PMID:3875625

  2. Scientific Data Services -- A High-Performance I/O System with Array Semantics

    SciTech Connect

    Wu, Kesheng; Byna, Surendra; Rotem, Doron; Shoshani, Arie

    2011-09-21

    As high-performance computing approaches exascale, the existing I/O system design is having trouble keeping pace in both performance and scalability. We propose to address this challenge by adopting database principles and techniques in parallel I/O systems. First, we propose to adopt an array data model because many scientific applications represent their data in arrays. This strategy follows a cardinal principle from database research, which separates the logical view from the physical layout of data. This high-level data model gives the underlying implementation more freedom to optimize the physical layout and to choose the most effective way of accessing the data. For example, knowing that a set of write operations is working on a single multi-dimensional array makes it possible to keep the subarrays in a log structure during the write operations and reassemble them later into another physical layout as resources permit. While maintaining the high-level view, the storage system could compress the user data to reduce the physical storage requirement, collocate data records that are frequently used together, or replicate data to increase availability and fault-tolerance. Additionally, the system could generate secondary data structures such as database indexes and summary statistics. We expect the proposed Scientific Data Services approach to create a “live” storage system that dynamically adjusts to user demands and evolves with the massively parallel storage hardware.
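
    The write-path optimization described above is easy to sketch: subarray writes are appended to a log together with their coordinates, and the contiguous physical layout is assembled later, when resources permit. The shapes and the in-memory "log" below are illustrative.

        # Hedged sketch: log-structured subarray writes, reassembled later.
        import numpy as np

        log = []   # append-only write log of (offset, block) records

        def write_subarray(offset, block):
            log.append((offset, block))    # fast path: no in-place layout work

        def reassemble(shape):
            arr = np.zeros(shape)
            for (r, c), block in log:      # later: lay the data out contiguously
                arr[r:r + block.shape[0], c:c + block.shape[1]] = block
            return arr

        write_subarray((0, 0), np.ones((2, 2)))
        write_subarray((2, 2), 2 * np.ones((2, 2)))
        print(reassemble((4, 4)))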

  3. High-performance and stability reticle writing system HL-800M

    NASA Astrophysics Data System (ADS)

    Kadowaki, Yasuhiro; Kawasaki, Katsuhiro; Mizuno, Kazui; Satoh, Hidetoshi; Hoga, Morihisa; Uryu, Ken

    1998-09-01

    The HL-800M has been developed as an electron beam (EB) reticle writing system for advanced reticle production. It is very important for an EB system to maintain high performance consistently in actual advanced reticle production. To meet this requirement, the system adopts an accelerating voltage of 50 kV, a variable shaped beam, a continuously moving stage, and a 3-stage deflector. To improve positioning accuracy in particular, the system has a temperature control system, an active vibration-isolation system, and new software for position error correction. Proximity effect correction, which changes the exposure shot time depending on the pattern density, and a multi-exposure function are also installed. As a result, a positioning accuracy of 32 nm and a long-term placement of 28 nm are obtained. The line-width linearity from 1 µm to 10 µm is within a range of 70 nm, and within 40 nm from 1 µm to 3 µm. The stitching accuracy at the stripe boundary is 26 nm, and 20 nm in the case of 3-pass exposure.
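
    The dose-modulation style of proximity effect correction mentioned above can be illustrated with the standard first-order model, in which each shot's exposure time is scaled down as the local pattern density rises so that the forward plus backscattered dose stays constant. The backscatter ratio and the linear density model are illustrative, not the HL-800M's calibration.

        # Hedged sketch: dose (shot-time) proximity correction vs. pattern density.
        def shot_time(base_ns, density, eta=0.7):
            # Classic equalization: D(rho) = D0 * (1 + eta) / (1 + 2 * eta * rho),
            # with eta the backscatter-to-forward dose ratio (assumed value).
            return base_ns * (1 + eta) / (1 + 2 * eta * density)

        for rho in (0.0, 0.25, 0.5, 1.0):
            print(f"local density {rho:.2f}: shot time {shot_time(100, rho):.1f} ns")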

  4. Engineering development of coal-fired high-performance power systems. Technical report, July - September 1996

    SciTech Connect

    1996-11-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, AlliedSignal Aerospace Equipment Systems, Bechtel Corporation, the University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF), a pulverized-fuel-fired boiler/air heater in which steam and gas turbine air are indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on those HIPPS subsystems that are neither commercial nor being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, after which a pilot plant with integrated pyrolyzer and char combustion systems will be tested. In this report, progress in the pyrolyzer pilot plant preparation is reported. The results of extensive laboratory and bench scale testing of representative char are also reported. Preliminary results of combustion modeling of the char combustion system are included. There are also discussions of the auxiliary systems that are planned for the char combustion system pilot plant and the status of the integrated system pilot plant.

  5. An empirical examination of the mechanisms mediating between high-performance work systems and the performance of Japanese organizations.

    PubMed

    Takeuchi, Riki; Lepak, David P; Wang, Heli; Takeuchi, Kazuo

    2007-07-01

    The resource-based view of the firm and social exchange perspectives are invoked to hypothesize linkages among high-performance work systems, collective human capital, the degree of social exchange in an establishment, and establishment performance. The authors argue that high-performance work systems generate a high level of collective human capital and encourage a high degree of social exchange within an organization, and that these are positively related to the organization's overall performance. On the basis of a sample of Japanese establishments, the results provide support for the existence of these mediating mechanisms through which high-performance work systems affect overall establishment performance. PMID:17638466

  6. Development and implementation of a high-performance, cardiac-gated dual-energy imaging system

    NASA Astrophysics Data System (ADS)

    Shkumat, N. A.; Siewerdsen, J. H.; Dhanantwari, A. C.; Williams, D. B.; Richard, S.; Tward, D. J.; Paul, N. S.; Yorkston, J.; Van Metter, R.

    2007-03-01

    Mounting evidence suggests that the superposition of anatomical clutter in a projection radiograph poses a major impediment to the detectability of subtle lung nodules. Through decomposition of projections acquired at multiple kVp, dual-energy (DE) imaging promises to dramatically improve lung nodule detectability and, in part through quantitation of nodule calcification, to increase specificity in nodule characterization. The development of a high-performance DE chest imaging system is reported, with design and implementation guided by fundamental imaging performance metrics. A diagnostic chest stand (Kodak RVG 5100 digital radiography system) provided the basic platform, modified to include: (i) a filter wheel, (ii) a flat-panel detector (Trixell Pixium 4600), (iii) a computer control and monitoring system for cardiac-gated acquisition, and (iv) DE image decomposition and display. Computational and experimental studies of imaging performance guided the optimization of key acquisition technique parameters, including x-ray filtration, allocation of dose between low- and high-energy projections, and kVp selection. A system for cardiac-gated acquisition was developed, directing x-ray exposures to the quiescent period of the heart cycle and thereby minimizing anatomical misregistration. A research protocol including 200 patients imaged following lung nodule biopsy is underway, allowing preclinical evaluation of DE imaging performance relative to conventional radiography and low-dose CT.
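
    The decomposition step itself is a weighted log subtraction: for one choice of weight the soft-tissue term cancels from the high/low-kVp pair, leaving a bone-equivalent image. The toy attenuation coefficients below are assumptions, not calibrated values for this system.

        # Hedged sketch: weighted log-subtraction dual-energy decomposition.
        import numpy as np

        rng = np.random.default_rng(0)
        tissue = rng.uniform(1.0, 2.0, (4, 4))           # path-integrated soft tissue
        bone = np.zeros((4, 4)); bone[1:3, 1:3] = 1.0    # a small bony insert

        mu_t_lo, mu_b_lo = 0.30, 0.60    # toy attenuation at the low-kVp beam
        mu_t_hi, mu_b_hi = 0.20, 0.30    # toy attenuation at the high-kVp beam

        I_lo = np.exp(-(mu_t_lo * tissue + mu_b_lo * bone))
        I_hi = np.exp(-(mu_t_hi * tissue + mu_b_hi * bone))

        w = mu_t_hi / mu_t_lo            # weight that cancels the tissue term
        bone_image = -(np.log(I_hi) - w * np.log(I_lo))
        print(np.round(bone_image / (mu_b_hi - w * mu_b_lo), 2))  # recovers the insert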

  7. A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles.

    PubMed

    Zhai, Yiwen; Zhang, Hui; Zhang, Lingling; Dong, Shaojun

    2016-05-01

    A high performance fluorescence switching system triggered electrochemically by Prussian blue with upconversion nanoparticles was proposed. We synthesized a kind of hexagonal monodisperse β-NaYF4:Yb(3+),Er(3+),Tm(3+) upconversion nanoparticle and manipulated the intensity ratio of the red emission (at 653 nm) and the green emission (at 523 and 541 nm) to around 2 : 1, in order to match well with the absorption spectrum of Prussian blue. Based on the efficient fluorescence resonance energy transfer and inner-filter effect between the as-synthesized upconversion nanoparticles and Prussian blue, the present fluorescence switching system shows obvious switching behavior with high fluorescence contrast and good stability. To further extend the application of this system in analysis, sulfite, an important anion in environmental and physiological systems, which can also reduce Prussian blue to Prussian white nanoparticles leading to a decrease of the absorption spectrum, was chosen as the target. We were able to determine the concentration of sulfite in aqueous solution with a low detection limit and a broad linear relationship. PMID:27102984

  8. A High Performance Computing Network and System Simulator for the Power Grid: NGNS^2

    SciTech Connect

    Villa, Oreste; Tumeo, Antonino; Ciraci, Selim; Daily, Jeffrey A.; Fuller, Jason C.

    2012-11-11

    Designing and planning next generation power grid systems composed of large power distribution networks, monitoring and control networks, autonomous generators and consumers of power requires advanced simulation infrastructures. The objective is to predict and analyze in time the behavior of networks of systems for unexpected events such as loss of connectivity, malicious attacks and power loss scenarios. This ultimately allows one to answer questions such as: “What could happen to the power grid if ...”. We want to be able to answer as many questions as possible in the shortest possible time for the largest possible systems. In this paper we present a new High Performance Computing (HPC) oriented simulation infrastructure named Next Generation Network and System Simulator (NGNS2). NGNS2 allows for the distribution of a single simulation among multiple computing elements by using MPI and OpenMP threads. NGNS2 provides the extensive configuration, fault tolerance and load balancing capabilities needed to simulate large and dynamic systems for long periods of time. We show preliminary results of the simulator running approximately two million simulated entities both on a 64-node commodity Infiniband cluster and a 48-core SMP workstation.

  9. Towards a smart Holter system with high performance analogue front-end and enhanced digital processing.

    PubMed

    Du, Leilei; Yan, Yan; Wu, Wenxian; Mei, Qiujun; Luo, Yu; Li, Yang; Wang, Lei

    2013-01-01

    Multiple-lead dynamic ECG recorders (Holters) play an important role in the early detection of various cardiovascular diseases. In this paper, we present the first several steps towards a 12-lead Holter system with a high-performance AFE (Analogue Front-End) and enhanced digital processing. The system incorporates an analogue front-end chip (the ADS1298 from TI), which has not yet been widely used in most commercial Holter products. A highly efficient data management module was designed to handle the data exchange between the ADS1298 and the microprocessor (an STM32L151 from STMicroelectronics). Furthermore, the system employs a Field Programmable Gate Array (Spartan-3E from Xilinx) module, on which a dedicated real-time 227-tap FIR filter is executed to improve the overall filtering performance, since the ADS1298 has no high-pass filtering capability and only allows limited low-pass filtering. The Spartan-3E FPGA is also capable of offering further on-board computational ability for a smarter Holter. The results indicate that all functional blocks work as intended. In the future, we will conduct clinical trials and compare our system with other state-of-the-art systems. PMID:24109911
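
    The FPGA's role can be approximated off-line: a linear-phase FIR band-pass supplies the high-pass edge the AFE lacks and sharpens the low-pass response. The 227-tap length follows the abstract; the band edges, rates, and test signal below are illustrative (and designed here with scipy rather than in hardware).

        # Hedged sketch: a 227-tap linear-phase FIR band-pass on a toy ECG trace.
        import numpy as np
        from scipy.signal import firwin, lfilter

        fs = 500                                      # sampling rate in Hz (assumed)
        taps = firwin(227, [1.0, 40.0], pass_zero=False, fs=fs)   # band-pass design

        t = np.arange(0, 10, 1 / fs)
        signal = np.sin(2 * np.pi * 10 * t)           # toy in-band ECG component
        raw = signal + 0.3 * np.sin(2 * np.pi * 50 * t)   # mains interference

        clean = lfilter(taps, 1.0, raw)
        delay = (len(taps) - 1) // 2                  # linear-phase group delay
        err = np.abs(clean[delay + 1000:delay + 4000] - signal[1000:4000]).mean()
        print(f"mean in-band error after filtering: {err:.4f}")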

  10. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data is becoming increasingly important in many application domains, including geospatial problems in numerous fields, location-based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive-scale spatial data is due to the proliferation of cost-effective and ubiquitous positioning technologies, the development of high-resolution imaging technologies, and contributions from large numbers of community users. There are two major challenges for managing and querying massive spatial data: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS, a scalable and high-performance spatial data warehousing system for running large-scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling of boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments demonstrate the high efficiency of Hadoop-GIS in query response and its high scalability on commodity clusters. Our comparative experiments show that the performance of Hadoop-GIS is on par with parallel SDBMSs and outperforms them for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries and as an integrated software package in Hive. PMID:24187650
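
    The core pattern, grid-based spatial partitioning followed by partition-local joins with neighbor cells covering boundary objects, can be sketched in a few lines of Python. This illustrates the general MapReduce pattern only, not the RESQUE engine's actual implementation:

```python
from collections import defaultdict

# Minimal sketch: bucket one dataset by grid cell ("map"), then join each
# probe point against its own and neighboring cells ("reduce"), so objects
# near partition boundaries are still matched. Cell size is arbitrary here.
CELL = 10.0

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

def partitioned_join(points_a, points_b, radius=1.0):
    grid = defaultdict(list)
    for p in points_b:
        grid[cell_of(*p)].append(p)
    pairs = []
    for x, y in points_a:
        cx, cy = cell_of(x, y)
        for dx in (-1, 0, 1):          # neighbor cells handle boundary objects
            for dy in (-1, 0, 1):
                for bx, by in grid[(cx + dx, cy + dy)]:
                    if (x - bx) ** 2 + (y - by) ** 2 <= radius ** 2:
                        pairs.append(((x, y), (bx, by)))
    return pairs

print(len(partitioned_join([(1, 1), (5, 5)], [(1.2, 1.1), (9, 9)])))  # -> 1
```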

  12. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high-resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the “big data” challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  13. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The resulting speed-ups are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination improves verified copies via cp and md5sum by almost 22x. Because these improvements come as drop-in replacements for cp and md5sum, they are easy to adopt, and the tools are available for download as open source software at http://mutil.sourceforge.net.
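
    The hash-tree idea can be sketched compactly: chunk digests are computed in parallel, and the serial digest then only runs over the short list of chunk hashes. Note this yields a tree digest rather than the file's plain md5sum, and the chunk size here is an assumption, not msum's actual split size:

```python
import hashlib, os
from concurrent.futures import ProcessPoolExecutor

CHUNK = 4 * 1024 * 1024  # split size; an assumption, not msum's actual default

def _chunk_md5(args):
    path, offset = args
    with open(path, "rb") as f:
        f.seek(offset)
        return hashlib.md5(f.read(CHUNK)).hexdigest()

def tree_md5(path):
    """Hash of chunk hashes: leaves are digested in parallel, then one short
    serial pass combines them. The result is a tree digest, deliberately
    different from the file's plain md5sum."""
    size = os.path.getsize(path)
    tasks = [(path, off) for off in range(0, size, CHUNK)]
    with ProcessPoolExecutor() as pool:
        leaves = pool.map(_chunk_md5, tasks)
    return hashlib.md5("".join(leaves).encode()).hexdigest()
```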

  14. Guidelines for application of fluorescent lamps in high-performance avionic backlight systems

    NASA Astrophysics Data System (ADS)

    Syroid, Daniel D.

    1997-07-01

    Fluorescent lamps have proven to be well suited for use in high performance avionic backlight systems as demonstrated by numerous production applications for both commercial and military cockpit displays. Cockpit display applications include: Boeing 777, new 737s, F-15, F-16, F-18, F-22, C-130, Navy P3, NASA Space Shuttle and many others. Fluorescent lamp based backlights provide high luminance, high lumen efficiency, precision chromaticity and long life for avionic active matrix liquid crystal display applications. Lamps have been produced in many sizes and shapes. Lamp diameters range from 2.6 mm to over 20 mm and lengths for the larger diameter lamps range to over one meter. Highly convoluted serpentine lamp configurations are common as are both hot and cold cathode electrode designs. This paper will review fluorescent lamp operating principles, discuss typical requirements for avionic grade lamps, compare avionic and laptop backlight designs and provide guidelines for the proper application of lamps and performance choices that must be made to attain optimum system performance considering high luminance output, system efficiency, dimming range and cost.

  15. A high performance hierarchical storage management system for the Canadian tier-1 centre at TRIUMF

    NASA Astrophysics Data System (ADS)

    Deatrich, D. C.; Liu, S. X.; Tafirout, R.

    2010-04-01

    We describe in this paper the design and implementation of Tapeguy, a high-performance, non-proprietary Hierarchical Storage Management (HSM) system which is interfaced to dCache for efficient tertiary storage operations. The system has been successfully implemented at the Canadian Tier-1 Centre at TRIUMF. The ATLAS experiment will collect a large amount of data (approximately 3.5 Petabytes each year). An efficient HSM system will play a crucial role in the success of the ATLAS Computing Model, which is driven by intensive large-scale data analysis activities performed continuously on the Worldwide LHC Computing Grid infrastructure. Tapeguy is Perl-based; it controls and manages data and tape libraries. Its architecture is scalable and includes dataset writing control, a read-back queuing mechanism, and I/O tape drive load balancing, as well as on-demand allocation of resources. A central MySQL database records metadata for every file and transaction (for audit and performance evaluation), as well as an inventory of library elements. Dataset writing groups files which are close in time and of similar type. Optional dataset path control dynamically allocates tape families and assigns tapes to them. Tape flushing is based on various strategies: time, thresholds, or external callback mechanisms. Read-back queuing reorders all read requests using an elevator algorithm, avoiding unnecessary tape loading and unloading. An implementation of priorities will guarantee file delivery to all clients in a timely manner.
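
    The read-back queuing idea is easy to sketch: sort pending requests by tape and by position on tape, so each sweep loads a tape once and reads its files in order. The request tuple layout below is assumed for illustration, not Tapeguy's actual schema:

```python
# Elevator-style reordering of tape read requests: group by tape, then sweep
# each tape's requests in position order, so a tape is loaded once per sweep
# instead of once per request. Field names are illustrative assumptions.
from itertools import groupby
from operator import itemgetter

def elevator_order(requests):
    # requests: iterable of (tape_id, position_on_tape, filename)
    ordered = sorted(requests, key=itemgetter(0, 1))
    return [(tape, [(pos, name) for _, pos, name in grp])
            for tape, grp in groupby(ordered, key=itemgetter(0))]

reqs = [("T07", 311, "b.root"), ("T01", 5, "a.root"), ("T07", 12, "c.root")]
for tape, files in elevator_order(reqs):
    print(tape, files)   # T01 first, then T07 with positions 12, 311
```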

  16. A Low-Cost, High-Performance System for Fluorescence Lateral Flow Assays

    PubMed Central

    Lee, Linda G.; Nordman, Eric S.; Johnson, Martin D.; Oldham, Mark F.

    2013-01-01

    We demonstrate a fluorescence lateral flow system that has excellent sensitivity and wide dynamic range. The illumination system utilizes an LED, plastic lenses, and plastic and colored-glass filters for the excitation and emission light. Images are collected on an iPhone 4. Several fluorescent dyes with long Stokes shifts were evaluated for their signal and nonspecific binding in lateral flow. A wide range of values for the ratio of signal to nonspecific binding was found, from 50 for R-phycoerythrin (R-PE) to 0.15 for Brilliant Violet 605. The long Stokes shift of R-PE allowed the use of inexpensive plastic filters rather than costly interference filters to block the LED light. Fluorescence detection with R-PE and absorbance detection with colloidal gold were directly compared in lateral flow using biotinylated bovine serum albumin (BSA) as the analyte. Fluorescence provided linear data over a range of 0.4–4,000 ng/mL with a 1,000-fold signal change, while colloidal gold provided non-linear data over a range of 16–4,000 ng/mL with a 10-fold signal change. A comparison using human chorionic gonadotropin (hCG) as the analyte showed a similar advantage for the fluorescent system. We believe our inexpensive yet high-performance platform will be useful for providing quantitative and sensitive detection in a point-of-care setting. PMID:25586412

  17. Reconfigurable and adaptive photonic networks for high-performance computing systems.

    PubMed

    Kodi, Avinash; Louri, Ahmed

    2009-08-01

    As feature sizes decrease to the submicrometer regime and clock rates increase to the multigigahertz range, the limited bandwidth at higher bit rates and longer communication distances in electrical interconnects will create a major bandwidth imbalance in future high-performance computing (HPC) systems. We explore the application of an optoelectronic interconnect for the design of flexible, high-bandwidth, reconfigurable and adaptive interconnection architectures for chip-to-chip and board-to-board HPC systems. Reconfigurability is realized by interconnecting arrays of optical transmitters, and adaptivity is implemented by a dynamic bandwidth reallocation (DBR) technique that balances the load on each communication channel. We evaluate a DBR technique, the lockstep (LS) protocol, that monitors traffic intensities, reallocates bandwidth, and adapts to changes in communication patterns. We incorporate this DBR technique into a detailed discrete-event network simulator to evaluate the performance for uniform, nonuniform, and permutation communication patterns. Simulation results indicate that, without reconfiguration applied, the optics-based architecture shows better performance than electrical interconnects for uniform and nonuniform patterns; with reconfiguration applied, the dynamically reconfigurable optoelectronic interconnect provides much better performance for all communication patterns. Based on the performance study, the reconfigured architecture shows 30%-50% higher throughput and 50%-75% lower network latency compared with HPC electrical networks. PMID:19649024
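
    A minimal sketch of the bandwidth-reallocation idea is given below: a fixed pool of channels is re-split in proportion to monitored demand. The actual LS protocol additionally handles monitoring windows and reconfiguration latency; all numbers here are invented:

```python
# Toy dynamic bandwidth reallocation: divide a fixed pool of wavelength
# channels among links in proportion to their measured traffic, keeping at
# least one channel per link. Illustrative only; not the LS protocol itself.
def reallocate(demands, total_channels):
    total = sum(demands) or 1
    alloc = [max(1, round(total_channels * d / total)) for d in demands]
    while sum(alloc) > total_channels:       # trim rounding overshoot
        alloc[alloc.index(max(alloc))] -= 1
    return alloc

print(reallocate([10, 40, 50], total_channels=16))  # -> [2, 6, 8]
```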

  18. IGUANA: a high-performance 2D and 3D visualisation system

    NASA Astrophysics Data System (ADS)

    Alverson, G.; Eulisse, G.; Muzaffar, S.; Osborne, I.; Taylor, L.; Tuura, L. A.

    2004-11-01

    The IGUANA project has developed visualisation tools for multiple high-energy experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes this back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort. IGUANA has collaborated with the open-source gl2ps project to create high-quality PostScript output: true vector graphics from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how this works. We also describe how to measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs, presenting good patterns to follow and bad patterns to avoid. We have added more advanced tools such as per-object clipping, slicing, lighting and animation, as well as multiple linked views with OpenInventor, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, even dynamically as a function of object properties, with instant visual feedback to the user.

  19. A High Performance Pocket-Size System for Evaluations in Acoustic Signal Processing

    NASA Astrophysics Data System (ADS)

    Rass, Uwe; Steeger, Gerhard H.

    2001-12-01

    Custom-made hardware is attractive for sophisticated signal processing in wearable electroacoustic devices, but has a high initial cost overhead. Thus, signal processing algorithms should be tested thoroughly in real application environments by potential end users prior to hardware implementation. In addition, the algorithms should be easily alterable during this test phase. A wearable system which meets these requirements has been developed and built. The system is based on the high performance signal processor Motorola DSP56309. This device also includes high quality stereo analog-to-digital (ADC) and digital-to-analog (DAC) converters with 20-bit word length each. The available dynamic range exceeds 88 dB. The input and output gains can be adjusted by digitally controlled potentiometers. The housing of the unit is small enough to be carried in a pocket (dimensions 150 × 80 × 25 mm). Software tools have been developed to ease the development of new algorithms. A set of configurable Assembler code modules implements all hardware-dependent software routines and gives easy access to the peripherals and interfaces. A comfortable fitting interface allows easy control of the signal processing unit from a PC, even by assistant personnel. The device has proven to be a helpful means for the development and field evaluation of advanced new hearing-aid algorithms within interdisciplinary research projects. It is now offered to the scientific community.

  20. Engineering Development of Coal-Fired High-Performance Power Systems

    SciTech Connect

    York Tsuo

    2000-12-31

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project, which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS subsystems that are not commercial or being developed under other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately. This report addresses the areas of technical progress for this quarter, including details of the syngas cooler design. Final construction work on the CFB pyrolyzer pilot plant started during this quarter; no experimental testing was performed. The proposed test matrix for future CFB pyrolyzer tests is given. Besides various fuels, bed temperature will be the primary test parameter.

  1. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    NASA Technical Reports Server (NTRS)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  2. Engineering development of coal-fired high-performance power systems

    SciTech Connect

    1999-10-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. Analysis of the arch-fired burner continued during this quarter. Unburned carbon and NOx performance are included in this report. Construction commenced this quarter to modify the CETF

  3. Data acquisition and control system for high-performance large-area CCD systems

    NASA Astrophysics Data System (ADS)

    Afanasieva, I. V.

    2015-04-01

    Astronomical CCD systems based on second-generation DINACON controllers were developed at the SAO RAS Advanced Design Laboratory more than seven years ago and since then have been in constant operation at the 6-meter and Zeiss-1000 telescopes. Such systems use monolithic large-area CCDs. We describe the software developed for the control of a family of large-area CCD systems equipped with a DINACON-II controller. The software suite serves for acquisition, primary reduction, visualization, and storage of video data, and also for the control, setup, and diagnostics of the CCD system.

  4. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-04-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  5. Engineering development of coal-fired high performance power systems phase 2 and 3

    SciTech Connect

    Unknown

    1999-08-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.2 HITAF Air Heaters; and Task 2.4 Duct Heater and Gas Turbine Integration.

  6. Design of a VLSI scan conversion processor for high-performance 3-D graphics systems

    SciTech Connect

    Huang, H.U.

    1988-01-01

    Scan-conversion processing is the bottleneck in the image generation process. To address smooth shading and hidden-surface elimination, a new processor architecture was developed, labeled the scan-conversion processor (SCP) architecture. The SCP performs hidden-surface elimination and scan conversion for 64 pixels. The color intensities are dual-buffered so that while one buffer is being updated the other can be scanned out. Z-depth is used to perform the hidden-surface elimination. The key operation performed by the SCP is the evaluation of linear functions of the form F(X,Y) = A X + B Y + C. The computation is further simplified by using incremental addition. The z-depth buffer and the color buffers are incorporated onto the same chip. The SCP receives from its preprocessor the information for the definition of polygons and the computation of z-depth and RGB color intensities. Many copies of this processor are used in a high-performance graphics system.
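
    The incremental-addition simplification is easy to illustrate in software: after one full evaluation per scanline, stepping X by one pixel costs a single addition of A. A minimal sketch:

```python
# Incremental evaluation of F(X, Y) = A*X + B*Y + C across a scanline:
# one addition per pixel instead of two multiplies and two adds, which is
# the simplification the SCP exploits in hardware.
def scanline_values(A, B, C, y, x0, width):
    f = A * x0 + B * y + C      # full evaluation once per scanline
    out = []
    for _ in range(width):
        out.append(f)
        f += A                  # stepping X by 1 just adds A
    return out

assert scanline_values(2, 3, 1, y=4, x0=0, width=4) == [13, 15, 17, 19]
```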

  7. Analysis of starch in food systems by high-performance size exclusion chromatography.

    PubMed

    Ovando-Martínez, Maribel; Whitney, Kristin; Simsek, Senay

    2013-02-01

    Starch has unique physicochemical characteristics among food carbohydrates. Starch contributes to the physicochemical attributes of food products made from roots, legumes, cereals, and fruits. It occurs naturally as distinct particles, called granules. Most starch granules are a mixture of two sugar polymers: a highly branched polysaccharide named amylopectin and an essentially linear polysaccharide named amylose. The starch contained in food products undergoes changes during processing, which alter the starch molecular weight and the amylose-to-amylopectin ratio. The objective of this study was to develop a new, simple, one-step, and accurate method for simultaneous determination of the amylose-to-amylopectin ratio as well as the weight-averaged molecular weight of starch in food products. Starch from bread flour, canned peas, corn flake cereal, snack crackers, canned kidney beans, pasta, potato chips, and white bread was extracted by dissolving in KOH and urea and precipitating with ethanol. Starch samples were solubilized and analyzed on a high-performance size exclusion chromatography (HPSEC) system. To verify the identity of the peaks, fractions were collected and soluble starch and beta-glucan assays were performed in addition to gas chromatography analysis. We found that all the fractions contain only glucose and that the soluble starch assay correlates with the HPSEC fractionation. This new method can be used to determine the amylose-to-amylopectin ratio and weight-averaged molecular weight of starch from various food products using as little as 25 mg of dry sample. PMID:23330715
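
    The weight-averaged molecular weight follows from the standard slice-based SEC formulas, Mw = Σ(h_i·M_i)/Σh_i, where h_i is the detector response of slice i and M_i the molar mass from the column calibration. A sketch with invented slice data (not values from the study):

```python
import numpy as np

# Slice-based SEC averages from a chromatogram; numbers are illustrative only.
h = np.array([0.2, 1.0, 2.5, 1.4, 0.3])   # detector heights per elution slice
M = np.array([5e6, 2e6, 8e5, 3e5, 1e5])   # calibrated molar mass per slice, g/mol

Mw = (h * M).sum() / h.sum()              # weight-averaged molecular weight
Mn = h.sum() / (h / M).sum()              # number average, for polydispersity
print(f"Mw = {Mw:.3g} g/mol, PDI = {Mw / Mn:.2f}")
```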

  8. Engineering development of coal-fired high performance power systems, Phase II and III

    SciTech Connect

    1999-01-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) that is capable of: thermal efficiency (HHV) ≥ 47%; NOx, SOx, and particulates ≤ 10% NSPS (New Source Performance Standard); coal providing ≥ 65% of heat input; all solid wastes benign; and cost of electricity ≤ 90% of present plants. Phase 1, which began in 1992, focused on the analysis of various configurations of indirectly fired cycles and on technical assessments of alternative plant subsystems and components, including performance requirements, developmental status, design options, complexity and reliability, and capital and operating costs. Phase 1 also included preliminary R and D and the preparation of designs for HIPPS commercial plants approximately 300 MWe in size. This phase, Phase 2, involves the development and testing of plant subsystems, refinement and updating of the HIPPS commercial plant design, and the site selection and engineering design of a HIPPS prototype plant. Work reported herein is from: Task 2.1 HITAC Combustors; Task 2.2 HITAF Air Heaters; Task 6 HIPPS Commercial Plant Design Update.

  9. High-performance CMOS image sensors at BAE SYSTEMS Imaging Solutions

    NASA Astrophysics Data System (ADS)

    Vu, Paul; Fowler, Boyd; Liu, Chiao; Mims, Steve; Balicki, Janusz; Bartkovjak, Peter; Do, Hung; Li, Wang

    2012-07-01

    In this paper, we present an overview of high-performance CMOS image sensor products developed at BAE SYSTEMS Imaging Solutions, designed to satisfy the increasingly challenging technical requirements of image sensors used in advanced scientific, industrial, and low-light imaging applications. We discuss the design and present the test results of a family of image sensors tailored for high imaging performance and capable of delivering sub-electron readout noise, high dynamic range, low power, high frame rates, and high sensitivity. We briefly review the performance of the CIS2051, a 5.5-Mpixel image sensor that represents our first commercial CMOS image sensor product and demonstrates the potential of our technology. We then present the performance characteristics of the CIS1021, a full-HD-format CMOS image sensor capable of delivering sub-electron read noise at a 50 fps frame rate at full HD resolution. We also review the performance of the CIS1042, a 4-Mpixel image sensor which offers better than 70% QE at 600 nm combined with better than 91 dB intra-scene dynamic range and about 1 e- read noise at a 100 fps frame rate at full resolution.

  10. Pyrolytic carbon-coated stainless steel felt as a high-performance anode for bioelectrochemical systems.

    PubMed

    Guo, Kun; Hidalgo, Diana; Tommasi, Tonia; Rabaey, Korneel

    2016-07-01

    Scale-up of bioelectrochemical systems (BESs) requires highly conductive, biocompatible and stable electrodes. Here we present pyrolytic carbon-coated stainless steel felt (C-SS felt) as a high-performance and scalable anode. The electrode is created by generating a carbon layer on stainless steel felt (SS felt) via a multi-step deposition process involving α-d-glucose impregnation, caramelization, and pyrolysis. Physicochemical characterization of the surface shows that a thin (20 ± 5 μm) and homogeneous layer of polycrystalline graphitic carbon was obtained on the SS felt surface after modification. The carbon coating significantly increases biocompatibility, enabling robust electroactive biofilm formation. The C-SS felt electrodes reach current densities (jmax) of 3.65 ± 0.14 mA/cm² within 7 days of operation, 11 times higher than plain SS felt electrodes (0.30 ± 0.04 mA/cm²). The excellent biocompatibility, high specific surface area, high conductivity, good mechanical strength, and low cost make C-SS felt a promising electrode for BESs. PMID:27058401

  11. HybridStore: A Cost-Efficient, High-Performance Storage System Combining SSDs and HDDs

    SciTech Connect

    Kim, Youngjae; Gupta, Aayush; Urgaonkar, Bhuvan; Piotr, Berman; Sivasubramaniam, Anand

    2011-01-01

    Unlike the use of DRAM for caching or buffering, certain idiosyncrasies of NAND Flash-based solid-state drives (SSDs) make their integration into existing systems non-trivial. Flash memory suffers from limits on its reliability, is an order of magnitude more expensive than magnetic hard disk drives (HDDs), and can sometimes be as slow as an HDD (due to excessive garbage collection (GC) induced by high-intensity random writes). Given these trade-offs between HDDs and SSDs in terms of cost, performance, and lifetime, the current consensus among several storage experts is to view SSDs not as a replacement for HDDs but rather as a complementary device within the high-performance storage hierarchy. We design and evaluate such a hybrid system called HybridStore to provide: (a) HybridPlan: an improved capacity planning technique for administrators with the overall goal of operating within cost budgets and (b) HybridDyn: improved performance/lifetime guarantees during episodes of deviation from expected workloads through two novel mechanisms: write regulation and fragmentation busting. As an illustrative example of HybridStore's efficacy, HybridPlan is able to find the most cost-effective storage configuration for a large-scale workload from Microsoft Research, suggesting one MLC SSD with ten 7.2K RPM HDDs instead of fourteen 7.2K RPM HDDs alone. HybridDyn is able to reduce the average response time for an enterprise-scale, random-write-dominant workload by about 71% compared to an HDD-based system.
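
    The write-regulation mechanism can be caricatured as a rate limiter: route writes to the SSD until the recent write intensity crosses a threshold, then spill to the HDD so garbage collection can catch up. The window and threshold below are invented, and the real HybridDyn policy is considerably richer:

```python
from collections import deque
import time

# Toy write-regulation policy in the spirit of HybridDyn; thresholds and the
# sliding window are illustrative assumptions, not the paper's parameters.
WINDOW_S, MAX_SSD_WRITES = 1.0, 1000
recent = deque()

def route_write(now=None):
    now = time.monotonic() if now is None else now
    while recent and now - recent[0] > WINDOW_S:
        recent.popleft()              # drop writes outside the sliding window
    if len(recent) < MAX_SSD_WRITES:
        recent.append(now)
        return "ssd"
    return "hdd"                      # regulate: divert the burst to the HDD

print([route_write(t / 2000) for t in range(5)])  # early writes go to the SSD
```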

  12. Advanced Insulation for High Performance Cost-Effective Wall, Roof, and Foundation Systems Final Report

    SciTech Connect

    Costeux, Stephane; Bunker, Shanon

    2013-12-20

    The objective of this project was to explore and potentially develop high-performing insulation with increased R/inch and low impact on climate change that would help design highly insulating building envelope systems with more durable performance and lower overall system cost than envelopes with equivalent performance made with materials available today. The proposed technical approach relied on insulation foams with nanoscale pores (about 100 nm in size) in which heat transfer is decreased. Through the development of new foaming methods, new polymer formulations and new analytical techniques, and by advancing the understanding of how cells nucleate, expand and stabilize at the nanoscale, Dow successfully invented and developed methods to produce foams with 100 nm cells and 80% porosity by batch foaming at the laboratory scale. Measurements of the gas conductivity on small nanofoam specimens confirmed quantitatively the benefit of nanoscale cells (the Knudsen effect) for increasing insulation value, which was the key technical hypothesis of the program. In order to bring this technology closer to a viable semi-continuous/continuous process, the project team modified an existing continuous extrusion foaming process and designed and built a custom system to produce 6" x 6" foam panels. Dow demonstrated for the first time that nanofoams can be produced in both processes. However, due to technical delays, foam characteristics achieved so far fall short of the 100 nm target set for optimal insulation foams. In parallel with the technology development, effort was directed to determining the most promising applications for nanocellular insulation foam. A Voice of Customer (VOC) exercise confirmed that demand for high-R-value products will rise due to increased building code requirements in the near future, but that acceptance of novel products by the building industry may be slow. Partnerships with green builders, initial launches in smaller markets (e.g. EIFS

  13. Engineering development of coal-fired high-performance power systems

    SciTech Connect

    1999-05-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2 which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. This report addresses the areas of technical progress for this quarter. The char combustion tests in the arch-fired arrangement were completed this quarter. A total of twenty-one setpoints were successfully completed, firing both synthetically-made char

  14. Coal-fired high performance power generating system. Quarterly progress report

    SciTech Connect

    Not Available

    1992-07-01

    The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: > 47% thermal efficiency; NOx, SOx, and particulates < 25% NSPS; cost of electricity 10% lower; coal > 65% of heat input; and all solid wastes benign. In order to achieve these goals our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame-type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (HITAF) which integrates several combustor and air heater designs with appropriate ash management procedures. Most of this report discusses the details of work on these components and the R&D plan for future work. The discussion of the combustor designs illustrates how detailed modeling can be an effective tool to estimate NOx production, minimum burnout lengths, combustion temperatures and even particulate impact on the combustor walls. When our model is applied to the long-flame concept it indicates that fuel-bound nitrogen will limit the range of coals that can use this approach. For high-nitrogen coals a rapid-mixing, rich-lean, deep-staging combustor will be necessary. The air heater design has evolved into two segments: a convective heat exchanger downstream of the combustion process, and a radiant panel heat exchanger located in the combustor walls. The relative amount of heat transferred either radiatively or convectively will depend on the combustor type and the ash properties.

  15. High performance dash on warning air mobile, missile system. [intercontinental ballistic missiles - systems analysis

    NASA Technical Reports Server (NTRS)

    Levin, A. D.; Castellano, C. R.; Hague, D. S.

    1975-01-01

    An aircraft-missile system which performs a high-acceleration takeoff followed by a supersonic dash to a 'safe' distance from the launch site is presented. Topics considered are: (1) the technological feasibility of the dash-on-warning concept; (2) aircraft and boost trajectory requirements; and (3) partial cost estimates for a fleet of aircraft providing 200 missiles on airborne alert. Various aircraft boost propulsion systems were studied, such as an unstaged cryogenic rocket, an unstaged storable liquid, and a staged solid-rocket system. Various wing planforms were also studied. Vehicle gross weights are given. The results indicate that the dash-on-warning concept will meet expected performance criteria and can be implemented using existing technology, such as all-aluminum aircraft and existing high-bypass-ratio turbofan engines.

  16. ENGINEERING DEVELOPMENT OF COAL-FIRED HIGH-PERFORMANCE POWER SYSTEMS

    SciTech Connect

    1998-11-01

    A High Performance Power System (HIPPS) is being developed. This system is a coal-fired, combined cycle plant with indirect heating of gas turbine air. Foster Wheeler Development Corporation and a team consisting of Foster Wheeler Energy Corporation, Bechtel Corporation, University of Tennessee Space Institute and Westinghouse Electric Corporation are developing this system. In Phase 1 of the project, a conceptual design of a commercial plant was developed. Technical and economic analyses indicated that the plant would meet the goals of the project which include a 47 percent efficiency (HHV) and a 10 percent lower cost of electricity than an equivalent size PC plant. The concept uses a pyrolysis process to convert coal into fuel gas and char. The char is fired in a High Temperature Advanced Furnace (HITAF). The HITAF is a pulverized fuel-fired boiler/air heater where steam is generated and gas turbine air is indirectly heated. The fuel gas generated in the pyrolyzer is then used to heat the gas turbine air further before it enters the gas turbine. The project is currently in Phase 2, which includes engineering analysis, laboratory testing and pilot plant testing. Research and development is being done on the HIPPS systems that are not commercial or being developed on other projects. Pilot plant testing of the pyrolyzer subsystem and the char combustion subsystem are being done separately, and after each experimental program has been completed, a larger scale pyrolyzer will be tested at the Power Systems Development Facility (PSDF) in Wilsonville, AL. The facility is equipped with a gas turbine and a topping combustor, and as such, will provide an opportunity to evaluate integrated pyrolyzer and turbine operation. The design of the char burner was completed during this quarter. The burner is designed for arch-firing and has a maximum capacity of 30 MMBtu/hr. This size represents a half-scale version of a typical commercial burner. The burner is outfitted with

  17. High-Performance Optical 3R Regeneration for Scalable Fiber Transmission System Applications

    NASA Astrophysics Data System (ADS)

    Zhu, Zuqing; Funabashi, Masaki; Pan, Zhong; Paraschis, Loukas; Harris, David L.; Ben Yoo, S. J.

    2007-02-01

    This paper proposes and demonstrates optical 3R regeneration techniques for high-performance and scalable 10-Gb/s transmission systems. The 3R structures rely on monolithically integrated, all-active semiconductor optical amplifier-based Mach-Zehnder interferometers (SOA-MZIs) for signal reshaping and optical narrowband filtering using a Fabry-Pérot filter (FPF) for all-optical clock recovery. The experimental results indicate very stable operation and superior cascadability of the proposed optical 3R structure, allowing error-free and low-penalty 10-Gb/s [pseudorandom bit sequence (PRBS) 2^23 - 1] return-to-zero (RZ) transmission through a record distance of 1 250 000 km using 10 000 optical 3R stages. Clock-enhancement techniques using an SOA-MZI are then proposed to accommodate the clock performance degradations that arise from dispersion-uncompensated transmission. Leveraging such clock-enhancement techniques, we experimentally demonstrate error-free 125 000-km RZ dispersion-uncompensated transmission at 10 Gb/s (PRBS 2^23 - 1) using 1000 stages of optical 3R regenerators spaced by 125-km large-effective-area fiber spans. To evaluate the proposed optical 3R structures in a relatively realistic environment and to investigate the tradeoff between the cascadability and the spacing of the optical 3R, a fiber recirculation loop is set up with 264- and 462-km deployed fiber. The field-trial experiment achieves error-free 10-Gb/s RZ transmission (PRBS 2^23 - 1) through 264 000 km of deployed fiber across 1000 stages of optical 3R regenerators spaced by 264-km spans.

  18. A multi-layer robust adaptive fault tolerant control system for high performance aircraft

    NASA Astrophysics Data System (ADS)

    Huo, Ying

    Modern high-performance aircraft demand advanced fault-tolerant flight control strategies. Not only control effector failures, but also aerodynamic-type failures such as wing-body damage, often result in substantially deteriorated performance because of low available redundancy. The remaining control actuators may yield substantially lower maneuvering capabilities which do not permit the accomplishment of the aircraft's originally specified mission. The problem is to reconfigure control over the available control redundancies when mission modification is required to save the aircraft. The proposed robust adaptive fault-tolerant control (RAFTC) system consists of a multi-layer reconfigurable flight controller architecture. It contains three layers accounting for different types and levels of failures, including sensor, actuator, and fuselage damage. In case of nominal operation with possible minor failure(s), a standard adaptive controller stands to achieve the control allocation. This is referred to as the first layer, the controller layer. Performance adjustment is accounted for in the second layer, the reference layer, whose role is to adjust the reference model in the controller design with a degraded transient performance. The uppermost adjustment, mission adjustment, takes place in the third layer, the mission layer, when the original mission is not feasible with greatly restricted control capabilities. The modified mission is achieved through optimization of the command signal, which guarantees the boundedness of the closed-loop signals. The main distinguishing feature of this layer is the mission decision property based on the currently available resources. The contribution of the research is the multi-layer fault-tolerant architecture that can address complete failure scenarios and their accommodation in reality. Moreover, the emphasis is on the mission design capabilities which may guarantee the stability of the aircraft with restricted post

  19. Instructional Leadership in Centralised Systems: Evidence from Greek High-Performing Secondary Schools

    ERIC Educational Resources Information Center

    Kaparou, Maria; Bush, Tony

    2015-01-01

    This paper examines the enactment of instructional leadership (IL) in high-performing secondary schools (HPSS), and the relationship between leadership and learning in raising student outcomes and encouraging teachers' professional learning in the highly centralised context of Greece. It reports part of a comparative research study focused on…

  20. Essential Elements of High Performing, High Quality Part C Systems. NECTAC Notes No. 25

    ERIC Educational Resources Information Center

    Lucas, Anne; Hurth, Joicey; Kasprzak, Christina

    2010-01-01

    National Early Childhood Technical Assistance Center (NECTAC) was asked to identify essential elements for supporting high performance and provision of high quality early intervention Part C services as determined by the Annual Performance Review (APR) required under Individuals with Disabilities Education Act (IDEA). To respond, NECTAC…

  1. High performance MRI simulations of motion on multi-GPU systems

    PubMed Central

    2014-01-01

    Background: MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Methods: Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times and avoid spurious echo formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Results: Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were obtained through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated almost linearly scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. Conclusions: MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer
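
    The per-timestep kernel update can be caricatured in NumPy: displace each isochromat along a motion model, then accrue phase from the local gradient. The motion model, gradient, and sizes below are invented placeholders for the paper's cardiac/respiratory/flow models:

```python
import numpy as np

# Toy per-timestep isochromat update: move spins along a motion model, then
# accrue phase from the instantaneous gradient at the new position. All
# constants and the motion model are illustrative assumptions.
GAMMA = 2 * np.pi * 42.58e6          # rad/s/T for 1H
DT = 10e-6                           # timestep, s
grad = np.array([0.0, 0.0, 10e-3])   # T/m, a static readout gradient

pos = np.random.rand(100_000, 3) * 0.2   # isochromat positions, m
phase = np.zeros(len(pos))

def displace(p, t, f=0.25, amp=0.01):
    # Periodic respiratory-like motion along z; a stand-in for the paper's
    # cardiac/respiratory/flow models.
    q = p.copy()
    q[:, 2] += amp * np.sin(2 * np.pi * f * t)
    return q

for step in range(100):
    pos_t = displace(pos, step * DT)
    phase += GAMMA * (pos_t @ grad) * DT   # phase accrued this timestep
print(abs(np.exp(1j * phase).mean()))      # net transverse signal magnitude
```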

  2. Silicon photonics-based laser system for high performance fiber sensing

    NASA Astrophysics Data System (ADS)

    Ayotte, S.; Faucher, D.; Babin, A.; Costin, F.; Latrasse, C.; Poulin, M.; G.-Deschênes, É.; Pelletier, F.; Laliberté, M.

    2015-09-01

    We present a compact four-laser source based on the low-noise, high-bandwidth Pound-Drever-Hall method and optical phase-locked loops for sensing narrow spectral features. Four semiconductor external cavity lasers in butterfly packages are mounted on a shared electronics control board, and all other optical functions are integrated on a single silicon photonics chip. This high-performance source is compact, automated, robust, operates over a wide temperature range and remains locked for days. A laser-to-resonance frequency noise of 0.25 Hz/√Hz is demonstrated.

  3. Relationships of Cognitive and Metacognitive Learning Strategies to Mathematics Achievement in Four High-Performing East Asian Education Systems

    ERIC Educational Resources Information Center

    Areepattamannil, Shaljan; Caleon, Imelda S.

    2013-01-01

    The authors examined the relationships of cognitive (i.e., memorization and elaboration) and metacognitive learning strategies (i.e., control strategies) to mathematics achievement among 15-year-old students in 4 high-performing East Asian education systems: Shanghai-China, Hong Kong-China, Korea, and Singapore. In all 4 East Asian education…

  4. Constructing a LabVIEW-Controlled High-Performance Liquid Chromatography (HPLC) System: An Undergraduate Instrumental Methods Exercise

    ERIC Educational Resources Information Center

    Smith, Eugene T.; Hill, Marc

    2011-01-01

    In this laboratory exercise, students develop a LabVIEW-controlled high-performance liquid chromatography system utilizing a data acquisition device, two pumps, a detector, and a fraction collector. The programming experience involves a variety of methods for interface communication, including serial control, analog-to-digital conversion, and…
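
    In Python terms, the serial-control part of such an exercise might look like the sketch below; the port name and pump command strings are invented placeholders, since real pump protocols vary by vendor and the original exercise issues its commands from LabVIEW:

```python
# Hypothetical serial pump control (pip install pyserial). The port name and
# the command strings are invented placeholders, not a real pump protocol.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as pump:
    pump.write(b"FLOW 1.00\r\n")      # set flow rate in mL/min (assumed syntax)
    pump.write(b"RUN\r\n")            # start the pump (assumed syntax)
    print(pump.readline().decode())   # read the pump's acknowledgement, if any
```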

  5. A High Resolution On-Chip Delay Sensor with Low Supply-Voltage Sensitivity for High-Performance Electronic Systems

    PubMed Central

    Sheng, Duo; Lai, Hsiu-Fan; Chan, Sheng-Min; Hong, Min-Rong

    2015-01-01

    An all-digital on-chip delay sensor (OCDS) circuit with high delay-measurement resolution and low supply-voltage sensitivity for efficient detection and diagnosis in high-performance electronic system applications is presented. Based on the proposed delay measurement scheme, the quantization resolution of the proposed OCDS can be reduced to several picoseconds. Additionally, the proposed cascade-stage delay measurement circuit can enhance immunity to supply-voltage variations of the delay measurement resolution without extra self-biasing or calibration circuits. Simulation results show that the delay measurement resolution can be improved to 1.2 ps; the average delay resolution variation is 0.55% with supply-voltage variations of ±10%. Moreover, the proposed delay sensor can be implemented in an all-digital manner, making it very suitable for high-performance electronic system applications as well as system-level integration. PMID:25688590

  6. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  7. HPTLC-aptastaining - Innovative protein detection system for high-performance thin-layer chromatography.

    PubMed

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-01-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not common but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to the variety of detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. As an example, this study focused on lysozyme, an enzyme that occurs in eggs and is added technologically to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergic reactions in sensitive individuals. Underlining the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols improves the sensitivity of protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides providing the first proof of its applicability, we show that (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for semi-quantitative estimation of protein concentrations. PMID:27220270

  8. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    NASA Astrophysics Data System (ADS)

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-05-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for e.g., investigating posttranslational modifications. This study exemplarily focused on the investigation of lysozyme, an enzyme which is occurring in eggs and technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols leads to improved sensitivity for protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC separated proteins using aptamers. Due to their affinity and specificity towards a wide range of targets, an aptamer based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides the proof of its applicability for the very first time, (i) aptamer-based staining of proteins is applicable on different stationary phase materials and (ii) furthermore, it can be used as an approach for a semi-quantitative estimation of protein concentrations.

  9. HPTLC-aptastaining – Innovative protein detection system for high-performance thin-layer chromatography

    PubMed Central

    Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha

    2016-01-01

    Protein analysis using high-performance thin-layer chromatography (HPTLC) is not common but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. This study focused on lysozyme as an example, an enzyme that occurs in eggs and is added during processing to foods and beverages such as wine. Detection of lysozyme is mandatory, as it can trigger allergic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, innovative, highly specific staining protocols were developed that improve the sensitivity of protein detection on HPTLC plates compared with universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Owing to their affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) enables manifold analytical possibilities. Besides the first proof of its applicability, (i) aptamer-based staining of proteins is applicable on different stationary-phase materials and (ii) it can be used as an approach for a semi-quantitative estimation of protein concentrations. PMID:27220270

  10. Determination of the kinetic rate constant of cyclodextrin supramolecular systems by high-performance affinity chromatography.

    PubMed

    Zhang, Jiwen; Li, Haiyan; Sun, Lixin; Wang, Caifen

    2015-01-01

    Association and dissociation are fundamental kinetic processes in host-guest interactions (such as drug-target and drug-excipient interactions) and in the in vivo performance of supramolecules. With the advantages of speed, high precision, and ease of automation, high-performance affinity chromatography (HPAC) is one of the best techniques for measuring the interaction kinetics of weak to moderate affinities, such as the typical host-guest interactions of drugs and cyclodextrins, using a cyclodextrin-immobilized column. The measurement involves equilibration of the cyclodextrin column; upload and elution of the samples (non-retained substances and retained solutes) at different flow rates on the cyclodextrin and control columns; and data analysis. Cyclodextrin-immobilized chromatography has proven to be a cost-efficient, high-throughput tool for measuring (small-molecule) drug-cyclodextrin interactions as well as the dissociation of other supramolecules with relatively weak, fast, and extensive interactions. PMID:25749964

  11. Coal-fired high performance power generating system. Draft quarterly progress report, January 1--March 31, 1995

    SciTech Connect

    1995-10-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, "Engineering Development of a Coal-Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of >47% thermal efficiency; NOx, SOx, and particulates ≤25% NSPS; coal providing ≥65% of heat input; and all solid wastes benign. A crucial aspect of the authors' design is the integration of the gas turbine requirements with the HITAF output and steam cycle requirements. In order to take full advantage of modern, highly efficient aeroderivative gas turbines, they have carried out a large number of cycle calculations to optimize their commercial plant designs for both greenfield and repowering applications.

  12. Coal-fired high performance power generating system. Quarterly progress report, July 1, 1993--September 30, 1993

    SciTech Connect

    Not Available

    1993-12-31

    This report covers work carried out under Task 3, Preliminary Research and Development, and Task 4, Commercial Generating Plant Design, under contract DE-AC22-92PC91155, "Engineering Development of a Coal Fired High Performance Power Generation System," between the DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of >47% thermal efficiency; NOx, SOx, and particulates ≤25% NSPS; coal providing ≥65% of heat input; and all solid wastes benign. The report discusses progress in cycle analysis, chemical reactor modeling, ash deposition rate calculations for the HITAF (high temperature advanced furnace) convective air heater, air heater materials, and deposit initiation and growth on ceramic substrates.

  13. High Performing Alabama School Systems: What Do the Best Have in Common?

    ERIC Educational Resources Information Center

    Miller-Whitehead, Marie

    The Alabama State Department of Education School System Report Card provides annual data for each of Alabama's city and county public school systems, including student achievement indicators on the Stanford Achievement Test, High School Exit exam, writing tests, ACT test, dropouts, ADA expenditures, free and reduced lunch, system revenues, and…

  14. Programmable partitioning for high-performance coherence domains in a multiprocessor system

    DOEpatents

    Blumrich, Matthias A.; Salapura, Valentina

    2011-01-25

    A multiprocessor computing system and a method of logically partitioning a multiprocessor computing system are disclosed. The multiprocessor computing system comprises a multitude of processing units, and a multitude of snoop units. Each of the processing units includes a local cache, and the snoop units are provided for supporting cache coherency in the multiprocessor system. Each of the snoop units is connected to a respective one of the processing units and to all of the other snoop units. The multiprocessor computing system further includes a partitioning system for using the snoop units to partition the multitude of processing units into a plurality of independent, memory-consistent, adjustable-size processing groups. Preferably, when the processor units are partitioned into these processing groups, the partitioning system also configures the snoop units to maintain cache coherency within each of said groups.
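
    A rough software intuition for such coherence-domain partitioning: each snoop unit forwards invalidations only to peers in its own partition, so each group stays internally memory-consistent while remaining independent of the others. The Python model below is purely illustrative (the patent describes a hardware mechanism; all names here are invented):

```python
# Illustrative software model of coherence-domain partitioning (hypothetical;
# the patent describes a hardware mechanism, not this code).

class SnoopUnit:
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.group = None          # coherence group this unit belongs to
        self.cache = {}            # address -> value (local cache model)

    def write(self, addr, value):
        self.cache[addr] = value
        # Broadcast invalidations only within the unit's own group,
        # leaving other partitions unaffected.
        for peer in self.group:
            if peer is not self:
                peer.cache.pop(addr, None)

def partition(units, group_sizes):
    """Split snoop units into independent, adjustable-size coherence groups."""
    assert sum(group_sizes) == len(units)
    groups, start = [], 0
    for size in group_sizes:
        group = units[start:start + size]
        for u in group:
            u.group = group
        groups.append(group)
        start += size
    return groups

units = [SnoopUnit(i) for i in range(8)]
partition(units, [4, 2, 2])        # e.g., one 4-way and two 2-way partitions
units[0].cache[0x100] = 1          # stale copy inside the same group
units[4].cache[0x100] = 1          # copy in a different group
units[1].write(0x100, 42)          # invalidates within group 0 only
print(0x100 in units[0].cache)     # False: invalidated by the snoop broadcast
print(0x100 in units[4].cache)     # True: other partition untouched
```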

  15. Development of Nano-structured Electrode Materials for High Performance Energy Storage System

    NASA Astrophysics Data System (ADS)

    Huang, Zhendong

    Systematic studies have been carried out to develop a low-cost, environmentally friendly, facile fabrication process for preparing high-performance nanostructured electrode materials and to understand the factors influencing their electrochemical performance in lithium ion batteries (LIBs) and supercapacitors. For LIBs, LiNi1/3Co1/3Mn1/3O2 (NCM) with a 1D porous structure has been developed as a cathode material. The tube-like 1D structure consists of inter-linked, multi-facet nanoparticles approximately 100-500 nm in diameter. The microscopically porous structure originates from the honeycomb-shaped precursor foaming gel, which serves as a self-template during the stepwise calcination process. The 1D NCM presents specific capacities of 153, 140, 130 and 118 mAh·g-1 at current densities of 0.1C, 0.5C, 1C and 2C, respectively. Subsequently, a novel stepwise crystallization process, with a higher crystallization temperature and a longer period for grain growth, was employed to prepare single-crystal NCM nanoparticles. The modified sol-gel process followed by the optimized crystallization process results in significant improvements in the chemical and physical characteristics of the NCM particles, including fully developed single-crystal NCM with uniform composition and a porous NCM architecture with a reduced degree of fusion and a large specific surface area. The NCM cathode material with these structural modifications in turn presents significantly enhanced specific capacities of 173.9, 166.9, 158.3 and 142.3 mAh·g-1 at 0.1C, 0.5C, 1C and 2C, respectively. Carbon nanotubes (CNTs) are used to improve the relatively low power capability and poor cyclic stability of NCM caused by its poor electrical conductivity. The NCM/CNT nanocomposite cathodes are prepared by simply mixing the two component materials followed by a thermal treatment. The CNTs were functionalized to obtain uniformly dispersed MWCNTs in the NCM matrix. The electrochemical
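
    The rate capability quoted above can be condensed into capacity retention from 0.1C to 2C; a quick check of the abstract's own numbers (a minimal sketch, not from the thesis):

```python
# Capacity retention at 2C relative to 0.1C, using the figures quoted above.
pristine = {"0.1C": 153.0, "2C": 118.0}     # mAh/g, 1D porous NCM
modified = {"0.1C": 173.9, "2C": 142.3}     # mAh/g, single-crystal NCM

for name, caps in [("1D porous NCM", pristine), ("single-crystal NCM", modified)]:
    retention = 100.0 * caps["2C"] / caps["0.1C"]
    print(f"{name}: {retention:.1f}% retention at 2C")
# 1D porous NCM: 77.1%; single-crystal NCM: 81.8%. The structural
# modifications improve both absolute capacity and rate retention.
```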

  16. HyperForest: A high performance multi-processor architecture for real-time intelligent systems

    SciTech Connect

    Garcia, P. Jr.; Rebeil, J.P.; Pollard, H.

    1997-04-01

    Intelligent Systems are characterized by the intensive use of computer power. The computer revolution of the last few years is what has made possible the development of the first generation of Intelligent Systems. Software for second-generation Intelligent Systems will be more complex and will require more powerful computing engines in order to meet the real-time constraints imposed by new robots, sensors, and applications. A multiprocessor architecture was developed that merges the advantages of message-passing and shared-memory structures: expandability and real-time compliance. The HyperForest architecture will provide an expandable real-time computing platform for computationally intensive Intelligent Systems and open the doors for the application of these systems to more complex tasks in environmental restoration and cleanup projects, flexible manufacturing systems, and DOE's own production and disassembly activities.

  17. Teacher and School Leader Effectiveness: Lessons Learned from High-Performing Systems. Issue Brief

    ERIC Educational Resources Information Center

    Alliance for Excellent Education, 2011

    2011-01-01

    In an effort to find best practices in enhancing teacher effectiveness, the Alliance for Excellent Education (Alliance) and the Stanford Center for Opportunity Policy in Education (SCOPE) looked abroad at education systems that appear to have well-developed and effective systems for recruiting, preparing, developing, and retaining teachers and…

  18. A high performance imagery system for unattended ground sensor tactical deployments

    NASA Astrophysics Data System (ADS)

    Hartup, David C.; Bobier, Kevin; Marks, Brian A.; Dirr, William J.; Salisbury, Richard; Brown, Alistair; Cairnduff, Bruce

    2006-05-01

    Modern Unattended Ground Sensor (UGS) systems require transmission of high quality imagery to a remote location while meeting severe operational constraints such as extended mission life using battery operation. This paper describes a robust imagery system that provides excellent performance for both long range and short range stand-off scenarios. The imaging devices include a joint EO and IR solution that features low power consumption, quick turn-on time, high resolution images, advanced AGC and exposure control algorithms, digital zoom, and compact packaging. Intelligent camera operation is provided by the System Controller, which allows fusion of multiple sensor inputs and intelligent target recognition. The System Controller's communications package is interoperable with all SEIWG-005 compliant sensors. Image transmission is provided via VHF, UHF, or SATCOM links. The system has undergone testing at Yuma Proving Ground and Ft. Huachuca, as well as extensive company testing. Results from these field tests are given.

  19. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high-performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  20. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which provides the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High-speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  1. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  2. High performance file compression algorithm for video-on-demand e-learning system

    NASA Astrophysics Data System (ADS)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2005-10-01

    Information processing and communication technology are progressing quickly and prevailing throughout various technological fields. The development of such technology should therefore respond to the need for improved quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene: recognizing the lecturer and the lecture stick by pattern-recognition techniques, the system deletes the figure of the lecturer, which is of low importance, and displays only the end point of the lecture stick. This enables the creation of highly compressed lecture video files suitable for Internet distribution. We compare this technique with simpler methods such as lower frame-rate video files and ordinary MPEG files. The experimental results show that the proposed compression processing system is much more effective than the others.

  3. An ultralightweight, evacuated, load-bearing, high-performance insulation system. [for cryogenic propellant tanks

    NASA Technical Reports Server (NTRS)

    Parmley, R. T.; Cunnington, G. R., Jr.

    1978-01-01

    A new hollow-glass microsphere insulation and a flexible stainless-steel vacuum jacket were demonstrated on a flight-weight cryogenic test tank, 1.17 m in diameter. The system weighs one-third as much as the most advanced vacuum-jacketed design demonstrated to date, a free-standing honeycomb hard shell with a multilayer insulation system (for a Space Tug application). Design characteristics of the flexible vacuum jacket are presented along with a model describing the insulation's thermal performance as a function of boundary temperatures and emittance, compressive load on the insulation, and insulation gas pressure. Test data are compared with model predictions and with prior flat-plate calorimeter test results. Potential applications for this insulation system or a derivative include the cryogenic Space Tug, the Single-Stage-to-Orbit Space Shuttle, LH2-fueled subsonic and hypersonic aircraft, and LNG applications.

  4. High-performance sub-terahertz transmission imaging system for food inspection

    PubMed Central

    Ok, Gyeongsik; Park, Kisang; Chun, Hyang Sook; Chang, Hyun-Joo; Lee, Nari; Choi, Sung-Wook

    2015-01-01

    Unlike X-ray systems, a terahertz imaging system can distinguish low-density materials in a food matrix. For applying this technique to food inspection, imaging resolution and acquisition speed ought to be simultaneously enhanced. Therefore, we have developed the first continuous-wave sub-terahertz transmission imaging system with a polygonal mirror. Using an f-theta lens and a polygonal mirror, beam scanning is performed over a range of 150 mm. For obtaining transmission images, the line-beam is incorporated with sample translation. The imaging system demonstrates that a pattern with 2.83 mm line-width at 210 GHz can be identified with a scanning speed of 80 mm/s. PMID:26137392

  5. High Performance Molecular Dynamic Simulation on Single and Multi-GPU Systems

    SciTech Connect

    Villa, Oreste; Chen, Long; Krishnamoorthy, Sriram

    2010-05-30

    The programming techniques supported and employed on GPU and multi-GPU systems are not sufficient to address problems exhibiting irregular and unbalanced workloads, such as Molecular Dynamics (MD) simulations of systems with non-uniform densities. In this paper, we propose a task-based dynamic load-balancing solution for MD simulations on single- and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in existing APIs such as NVIDIA's CUDA. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs.
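
    The core idea, finer-grained tasks pulled dynamically from a shared queue rather than statically pre-assigned, can be sketched in plain Python. Threads stand in for GPUs here; everything in the sketch is an illustrative assumption, not the paper's CUDA implementation.

```python
# Dynamic load balancing via a shared work queue: workers (standing in for
# GPUs) pull variable-cost tasks as they finish, instead of receiving a
# fixed static slice. Illustrative only.
import queue, threading, random, time

tasks = queue.Queue()
for cell in range(64):
    # Non-uniform densities lead to highly variable per-cell work.
    tasks.put((cell, random.uniform(0.001, 0.02)))

done = {w: 0 for w in range(4)}

def worker(wid):
    while True:
        try:
            cell, cost = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(cost)          # stand-in for computing forces in one cell
        done[wid] += 1

threads = [threading.Thread(target=worker, args=(w,)) for w in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print("tasks per worker:", done)  # roughly balanced despite uneven task costs
```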

  6. High performance CCD camera system for digitalisation of 2D DIGE gels.

    PubMed

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light-emitting diodes (LEDs) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed, high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from linear range and limit of detection. PMID:27252121

  7. Energy Performance Testing of Asetek's RackCDU System at NREL's High Performance Computing Data Center

    SciTech Connect

    Sickinger, D.; Van Geet, O.; Ravenscroft, C.

    2014-11-01

    In this study, we report on the first tests of Asetek's RackCDU direct-to-chip liquid cooling system for servers at NREL's ESIF data center. The system was simple to install on the existing servers and integrated directly into the data center's existing hydronics system. The focus of this study was to explore the total cooling energy savings and the potential for waste-heat recovery of this warm-water liquid cooling system. RackCDU captured up to 64% of server heat into the liquid stream at an outlet temperature of 89 degrees F, and 48% at outlet temperatures approaching 100 degrees F. The system was designed to capture heat from the CPUs only, indicating a potential for increased heat capture if memory cooling were included. Reduced temperatures inside the servers caused all fans to drop to the lowest possible BIOS power setting, indicating further energy savings potential if additional fan control were included. Preliminary studies manually reducing fan speed (and even removing fans) validated this potential savings but could not be optimized for these working servers. The Asetek direct-to-chip liquid cooling system has been in operation with users for 16 months with no maintenance required and no leaks.

  8. High performance in low-flow solar domestic hot water systems

    SciTech Connect

    Dayan, M.

    1997-12-31

    Low-flow solar hot water heating systems employ flow rates on the order of 1/5 to 1/10 of conventional flow. Low-flow systems are of interest because the reduced flow rate allows smaller-diameter tubing, which is less costly to install, and because they result in increased tank stratification. Lower collector inlet temperatures are achieved through stratification, and the useful energy produced by the collector is increased. The disadvantage of low-flow systems is that the collector heat removal factor decreases with decreasing flow rate. Many solar domestic hot water systems require an auxiliary electric source to operate a pump that circulates fluid through the solar collector. A photovoltaic-driven pump can replace the standard electrical pump; PV-driven pumps provide an ideal means of controlling the flow rate, as they circulate fluid only when there is sufficient radiation. Peak performance was always found to occur when the heat exchanger tank-side flow rate was approximately equal to the average load flow rate. For low collector-side flow rates, a small deviation from the optimum flow rate will dramatically affect system performance.

  9. High-performance radial AMTEC cell design for ultra-high-power solar AMTEC systems

    SciTech Connect

    Hendricks, T.J.; Huang, C.

    1999-07-01

    Alkali Metal Thermal to Electric Conversion (AMTEC) technology is rapidly maturing for potential application in the ultra-high-power solar AMTEC systems required by potential future US Air Force (USAF) spacecraft missions in medium-earth and geosynchronous orbits (MEO and GEO). Solar thermal AMTEC power systems potentially have several important advantages over current solar photovoltaic power systems in ultra-high-power spacecraft applications for USAF MEO and GEO missions. This work presents key aspects of radial AMTEC cell design for achieving high cell performance in solar AMTEC systems delivering more than 50 kW(e) to support high-power USAF missions. These missions typically require AMTEC cell conversion efficiency greater than 25%. A design parameter methodology is described and demonstrated which establishes optimum design parameters in any radial cell design to satisfy high-power mission requirements. Specific relationships, which are distinct functions of cell temperatures and pressures, define critical dependencies between key cell design parameters, particularly the impact of parasitic thermal losses on Beta Alumina Solid Electrolyte (BASE) area requirements, voltage, number of BASE tubes, and system power production, for both maximum power-per-BASE-area and optimum efficiency conditions. Finally, some high-level system tradeoffs are demonstrated using the design parameter methodology to establish high-power radial cell design requirements and philosophy. The discussion highlights how to combine this methodology with SINDA/FLUINT AMTEC cell modeling capabilities to determine optimum radial AMTEC cell designs.

  10. Structural integrity and damage assessment of high performance arresting cable systems using an embedded distributed fiber optic sensor (EDIFOS) system

    NASA Astrophysics Data System (ADS)

    Mendoza, Edgar A.; Kempen, Cornelia; Sun, Sunjian; Esterkin, Yan; Prohaska, John; Bentley, Doug; Glasgow, Andy; Campbell, Richard

    2010-04-01

    Redondo Optics, in collaboration with the Cortland Cable Company, TMT Laboratories, and Applied Fiber under a US Navy SBIR project, is developing an embedded distributed fiber optic sensor (EDIFOSTM) system for real-time structural health monitoring, damage assessment, and lifetime prediction of next-generation synthetic-material arresting gear cables. The EDIFOSTM system represents a new, highly robust and reliable technology that can be used for the structural damage assessment of critical cable infrastructures. The Navy is currently investigating the use of new, all-synthetic-material arresting cables. The arresting cable is one of the most stressed components in the entire arresting gear landing system. Synthetic rope materials offer higher strength-to-weight performance, which improves the arresting gear engine's performance, resulting in reduced wind-over-deck requirements, higher aircraft bring-back-weight capability, simplified operation, maintenance, and supportability, and reduced life-cycle costs. While employing synthetic cables offers many advantages for the Navy's future needs, the unknown failure modes of these cables remain a high technical risk. For these reasons, Redondo Optics is investigating the use of embedded fiber optic sensors within the synthetic arresting cables to provide real-time structural assessment of the cable state and to inform the operator when a particular cable has suffered impact damage, is near failure, or is approaching the limit of its service lifetime. To date, ROI and its collaborators have developed a technique for embedding multiple sensor fibers within the strands of high-performance synthetic-material cables and have used the embedded fiber sensors to monitor the structural integrity of the cables during tensile and compressive loads exceeding 175,000 lbf, without any damage to the cable structure or the embedded fiber sensors.

  11. Spectra-view: A high performance, low-cost multispectral airborne imaging system

    SciTech Connect

    Helder, D.

    1996-11-01

    Although a variety of airborne platforms are available for collecting remote sensing data, a niche exists for a low-cost, compact system capable of collecting accurate visible and infrared multispectral data in a digital format. To fill this void, an instrument known as Spectra-View was developed by Airborne Data Systems. Multispectral data are collected in the visible and near-infrared using an array of CCD cameras with appropriate spectral filtering; infrared imaging is accomplished using commercially available cameras. Although the current system images in five spectral bands, a modular design approach allows various configurations for imaging in the visible and infrared regions with up to 10 or more channels. The system was built entirely from readily available commercial components, is compact enough to fly in an aircraft as small as a Cessna 172, and can record imagery at airspeeds in excess of 150 knots. A GPS-based navigation system provides a course deviation indicator for the pilot to follow and allows georeferencing of the data. To maintain precise pointing knowledge while keeping system cost low, attitude sensors are mounted directly with the cameras rather than on a stabilized mount. Aircraft/camera attitude along the yaw, pitch, and roll axes is recorded at each camera firing. All data are collected in digital format on a hard disk that is removable during flight, so virtually unlimited amounts of data may be recorded. Following collection, imagery is readily available for viewing and incorporation into computer-based systems for analysis and reduction. Ground processing software has been developed to perform radiometric calibration and georeference the imagery. Since June 1995, the system has been collecting high-quality data in a variety of applications for numerous customers, including agriculture, forestry, and global change research. Several examples will be presented.

  12. High performance computational integral imaging system using multi-view video plus depth representation

    NASA Astrophysics Data System (ADS)

    Shi, Shasha; Gioia, Patrick; Madec, Gérard

    2012-12-01

    Integral imaging is an attractive auto-stereoscopic three-dimensional (3D) technology for next-generation 3DTV, but its application is obstructed by poor image quality, huge data volume, and high processing complexity. In this paper, a new computational integral imaging (CII) system using multi-view video plus depth (MVD) representation is proposed to solve these problems. The originality of this system lies in three aspects. First, a particular depth-image-based rendering (DIBR) technique is used in the encoding process to exploit the inter-view correlation between different sub-images (SIs). Thereafter, the same DIBR method is applied on the display side to interpolate virtual SIs and improve the reconstructed 3D image quality. Finally, a novel parallel group projection (PGP) technique is proposed to simplify the reconstruction process. According to experimental results, the proposed CII system improves compression efficiency and displayed image quality while reducing calculation complexity.

  13. Optically synchronized dual-channel terahertz signals for high-performance transmitter/receiver system

    NASA Astrophysics Data System (ADS)

    Shimizu, Naofumi; Oh, Kyoung-Hwan; Kohjiro, Satoshi; Kikuchi, Ken'ichi; Wakatsuki, Atsushi; Kukutsu, Naoya; Kado, Yuichi

    2010-02-01

    We developed a high-sweeping-speed optically synchronized dual-channel terahertz signal generator, in which the frequency difference between the two terahertz signals is independent of the frequency of the terahertz signals themselves. This feature is essential for heterodyne detection of terahertz signals with various frequencies. With this generator, a frequency-sweepable terahertz transmitter (Tx)/receiver (Rx) system with a wide dynamic range can be realized without sacrificing the high frequency-sweeping speed. Absorption line measurements for water vapor and nitrous oxide show that the developed Tx/Rx system can detect gas absorption with the optical depth of 0.04 or less. This result indicates the potential of the system as a remote gas sensor and gas analyzer.

  14. Building America Best Practices Series, Volume 6: High-Performance Home Technologies: Solar Thermal & Photovoltaic Systems

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Ruiz, Kathleen A.; Steward, Heidi E.; Love, Pat M.

    2007-06-04

    This guide was written by PNNL for the US Department of Energy's Building America program to provide information for residential production builders interested in building near zero energy homes. It provides in-depth descriptions of various rooftop photovoltaic power generating systems for homes, as well as extensive information on various designs of solar thermal water heating systems. The guide gives construction company owners and managers an understanding of how solar technologies can be added to their homes in a way that is cost-effective, practical, and marketable. Twelve case studies provide examples of production builders across the United States who are building energy-efficient homes with photovoltaic or solar water heating systems.

  15. A High Performance Sample Delivery System for Closed-Path Eddy Covariance Measurements

    NASA Astrophysics Data System (ADS)

    Nottrott, Anders; Leggett, Graham; Alstad, Karrin; Wahl, Edward

    2016-04-01

    The Picarro G2311-f Cavity Ring-Down Spectrometer (CRDS) measures CO2, CH4, and water vapor at high frequency with parts-per-billion (ppb) sensitivity for eddy covariance, gradient, and eddy accumulation measurements. In flux mode, the analyzer measures the concentration of all three species at 10 Hz with a cavity gas exchange rate of 5 Hz. We developed an enhanced pneumatic sample delivery system for drawing air from the atmosphere into the cavity. The new sample delivery system maintains the 5 Hz gas exchange rate and allows longer sample intake lines to be configured in tall-tower applications (> 250 ft line at sea level). We quantified the system performance in terms of vacuum pump headroom and 10-90% concentration step response for several intake line lengths at various elevations. Sample eddy covariance data are shown from an alfalfa field in Northern California, USA.

  16. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm, called power and deadline-aware multicore scheduling (PDAMS), simultaneously considers multiple criteria, including power consumption and task deadlines. Experimental results show that the proposed algorithm can reduce energy consumption by up to 54.2% and greatly reduce the number of missed deadlines, as compared to the other scheduling algorithms examined in this paper. PMID:24955382
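
    As a rough illustration of deadline- and power-aware assignment, the sketch below greedily places each task on the core that can still meet its deadline at the lowest added energy. The core model, task set, and greedy rule are all assumptions for illustration; PDAMS itself is more elaborate than this.

```python
# Greedy deadline- and power-aware placement: each task goes to the core
# that can still meet its deadline at the lowest added energy.
# Hypothetical model, not the PDAMS algorithm from the paper.

cores = [
    {"name": "big",    "speed": 2.0, "power": 2.5, "busy_until": 0.0},
    {"name": "little", "speed": 1.0, "power": 1.0, "busy_until": 0.0},
]
tasks = [  # (work units, absolute deadline in seconds)
    (4.0, 3.0), (1.0, 4.0), (2.0, 5.0), (1.0, 9.0),
]

for work, deadline in sorted(tasks, key=lambda t: t[1]):  # EDF ordering
    best, best_energy = None, float("inf")
    for c in cores:
        runtime = work / c["speed"]
        finish = c["busy_until"] + runtime
        energy = c["power"] * runtime
        if finish <= deadline and energy < best_energy:
            best, best_energy = c, energy
    if best is None:
        print(f"task({work}, dl={deadline}): deadline miss unavoidable")
        continue
    best["busy_until"] += work / best["speed"]
    print(f"task({work}, dl={deadline}) -> {best['name']} "
          f"(+{best_energy:.1f} J)")
```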

  17. Damage-mitigating control of space propulsion systems for high performance and extended life

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Wu, Min-Kuang; Dai, Xiaowen; Carpino, Marc; Lorenzo, Carl F.

    1993-01-01

    Calculations are presented showing that a substantial improvement in service life of a reusable rocket engine can be achieved by an insignificant reduction in the system dynamic performance. The paper introduces the concept of damage mitigation and formulates a continuous-time model of fatigue damage dynamics. For control of complex mechanical systems, damage prediction and damage mitigation are carried out based on the available sensory and operational information such that the plant can be inexpensively maintained and safely and efficiently steered under diverse operating conditions. The results of simulation experiments are presented for transient operations of a reusable rocket engine.

  18. Building High-Performing and Improving Education Systems: Quality Assurance and Accountability. Review

    ERIC Educational Resources Information Center

    Slater, Liz

    2013-01-01

    Monitoring, evaluation, and quality assurance in their various forms are seen as being one of the foundation stones of high-quality education systems. De Grauwe, writing about "school supervision" in four African countries in 2001, linked the decline in the quality of basic education to the cut in resources for supervision and support.…

  19. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  20. High performance computing in biology: multimillion atom simulations of nanoscale systems.

    PubMed

    Sanbonmatsu, K Y; Tung, C-S

    2007-03-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nano-scale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
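
    The quoted 85% scaling efficiency uses the conventional definition: efficiency is the speedup relative to a reference run, divided by the increase in processor count. A minimal check (the 128-CPU baseline timing here is an assumed example; only the 0.85 and 1024 figures come from the abstract):

```python
# Parallel scaling efficiency: E = (T_ref / T_N) * (N_ref / N), relative to a
# reference run. With E = 0.85 on N = 1024 CPUs, the run delivers the work
# of roughly 870 ideally scaled processors.
def scaling_efficiency(t_ref, n_ref, t_n, n):
    speedup = t_ref / t_n
    return speedup * n_ref / n

# Example: if 128 CPUs take 80 s/step and 1024 CPUs take 11.76 s/step:
e = scaling_efficiency(80.0, 128, 11.76, 1024)
print(f"efficiency = {e:.2f}")                # ~0.85
print(f"effective CPUs = {0.85 * 1024:.0f}")  # ~870
```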

  1. Aim Higher: Lofty Goals and an Aligned System Keep a High Performer on Top

    ERIC Educational Resources Information Center

    McCommons, David P.

    2014-01-01

    Every school district is feeling the pressure to ensure higher academic achievement for all students. A focus on professional learning for an administrative team not only improves student learning and achievement, but also assists in developing a systemic approach for continued success. This is how the Fox Chapel Area School District in…

  2. A high performance low cost flow-through solar water pasteurization system

    SciTech Connect

    Duff, W.S.; Hodgson, D.

    1999-07-01

    In the rural areas of developing countries, boiling is the means most often used to purify water for food preparation and drinking. However, boiling is relatively expensive, consumes substantial amounts of fossil energy, and the associated wood gathering contributes to the depletion of forests. Solar water pasteurization is one of the most promising approaches for a cost-effective, robust, and reliable solution to these problems. The authors are developing a solar water pasteurization system based on an evacuated solar collector, an appropriately matched heat exchanger, and a system for regulating the pasteurization temperature and holding time. The unit is completely passive, requiring no power of any sort. As part of the design requirements, the authors have imposed low fabrication and installation cost goals; experimental versions have been fabricated for a materials cost of under $150 US. The authors have designed, built, and experimentally evaluated several designs. The most recent testing was performed on a system using water density as the basis for regulating the pasteurization temperature and holding time. They have tested and are currently refining a new design based on an innovative regulation system that is more compact and robust than the water-density approach. Once testing is completed, they have arranged to place two units at a school in Uganda, where the units will be exposed to the actual conditions of use in developing countries. They will report the details of current and previous designs, provide experimental results, and, in the April presentation, relate initial experiences with the units in Uganda.

  3. Whisker: a client-server high-performance multimedia research control system.

    PubMed

    Cardinal, Rudolf N; Aitken, Michael R F

    2010-11-01

    We describe an original client-server approach to behavioral research control and the Whisker system, a specific implementation of this design. The server process controls several types of hardware, including digital input/output devices, multiple graphical monitors and touchscreens, keyboards, mice, and sound cards. It provides a way for client programs to access this hardware, communicating with them via a simple text-based network protocol built on the standard Internet protocol suite. Clients implementing behavioral tasks may be written in any network-capable programming language. Applications to date have been in experimental psychology and behavioral and cognitive neuroscience, using rodents, humans, nonhuman primates, dogs, pigs, and birds. The system is flexible and reliable, although there are potential disadvantages in terms of complexity. Its design, features, and performance are described. PMID:21139173
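
    Because clients communicate with the server over a plain text, line-oriented network connection, a minimal client fits in a few lines. The sketch below is a hypothetical Python client: the host, port, and command strings are illustrative placeholders, not Whisker's documented command set, which should be checked against the system's manual.

```python
# Minimal line-oriented TCP client of the kind a Whisker-style server expects.
# HOST, PORT, and the command vocabulary here are illustrative placeholders.
import socket

HOST, PORT = "localhost", 9999   # assumed address of the control server

def send_command(sock, command):
    """Send one newline-terminated text command and return one reply line."""
    sock.sendall((command + "\n").encode("ascii"))
    reply = b""
    while not reply.endswith(b"\n"):
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk
    return reply.decode("ascii").strip()

with socket.create_connection((HOST, PORT)) as sock:
    print(send_command(sock, "ClientName BehavioralTask1"))  # identify client
    print(send_command(sock, "LineSetState houselight on"))  # drive digital I/O
```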

  4. Fair share on high performance computing systems : what does fair really mean?

    SciTech Connect

    Clearwater, Scott Harvey; Kleban, Stephen David

    2003-03-01

    We report on a performance evaluation of a Fair Share system on the ASCI Blue Mountain supercomputer cluster. We study the impact of share allocation under Fair Share on wait times and expansion factor. We also measure the Service Ratio, a typical figure of merit for Fair Share systems, with respect to a number of job parameters. We conclude that Fair Share does little to alter important performance metrics such as the expansion factor, which raises the question of what Fair Share means on cluster machines. The essential difference between Fair Share on a uniprocessor and on a cluster is that the workload on a cluster is not fungible in space or time. We find that cluster machines must be highly utilized and must support checkpointing for Fair Share to function closer to the spirit in which it was originally developed.
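
    For context, the expansion factor is conventionally defined as a job's turnaround time (wait plus run) divided by its run time, so 1.0 means no queueing delay. A minimal sketch of the computation, with invented job records:

```python
# Expansion factor: (wait + run) / run for each job; 1.0 means no queue delay.
# Job records are invented for illustration.
jobs = [
    {"wait_s": 3600, "run_s": 7200},   # waited 1 h, ran 2 h   -> 1.5
    {"wait_s": 0,    "run_s": 600},    # no wait               -> 1.0
    {"wait_s": 5400, "run_s": 1800},   # waited 1.5 h, ran 0.5 h -> 4.0
]

factors = [(j["wait_s"] + j["run_s"]) / j["run_s"] for j in jobs]
print("per-job expansion factors:", factors)
print("mean expansion factor:", sum(factors) / len(factors))
```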

  5. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    NASA Astrophysics Data System (ADS)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  6. Platform-Based Design for the Low Complexity and High Performance De-Interlacing System

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han; Lin, Hsueh-Liang

    With the development of digital TV systems, displaying NTSC signals on a digital TV is a problem, and de-interlacing algorithms address it. In previous work, motion compensation (MC) methods for de-interlacing required substantial computation and were not easy to implement in hardware. In this paper, a content-adaptive de-interlacing algorithm is proposed. Our algorithm is based on the motion adaptive (MA) method, which combines the advantages of intra-field and inter-field methods. We propose a block-type decision mechanism to predict the video content instead of blindly applying the MC method throughout the entire frame. Additionally, in the intra-field method, we propose the edge-based adaptive weight average (EAWA) method to achieve better performance and smooth edges and stripes. To demonstrate our algorithm, we implement the de-interlacing system on a DSP platform with thorough complexity analysis. Compared to the MC method, we not only achieve higher video quality both objectively and subjectively, but also consume less computation power. CPU run-time profiling shows that the proposed algorithm requires only one-fifth the run time of the MC method. On the DSP demonstration board, the saving ratio is about 54% to 96%.
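
    For intuition, intra-field methods fill each missing scan line from the field's own pixels, and edge-adaptive variants interpolate along the direction of least luminance change. The sketch below implements the classic edge-based line average as a stand-in; it is not the paper's EAWA weighting, which the abstract does not specify.

```python
# Edge-adaptive intra-field interpolation (classic ELA-style), shown as a
# simple stand-in for edge-directed methods such as the paper's EAWA.
import numpy as np

def interpolate_missing_line(above, below):
    """Interpolate one missing scan line from its neighbours above/below."""
    w = above.shape[0]
    out = np.empty(w, dtype=np.float64)
    for x in range(1, w - 1):
        # Candidate directions: 135 degrees, vertical, 45 degrees.
        diffs = {d: abs(above[x + d] - below[x - d]) for d in (-1, 0, 1)}
        d = min(diffs, key=diffs.get)          # direction of least change
        out[x] = 0.5 * (above[x + d] + below[x - d])
    out[0] = 0.5 * (above[0] + below[0])       # plain averaging at borders
    out[-1] = 0.5 * (above[-1] + below[-1])
    return out

above = np.array([10, 10, 10, 200, 200], dtype=np.float64)
below = np.array([10, 200, 200, 200, 200], dtype=np.float64)
print(interpolate_missing_line(above, below))
# [ 10.  10. 200. 200. 200.] : the interpolated line follows the diagonal edge.
```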

  7. High Performance Operation Control for Heat Driven Heat Pump System using Metal Hydride

    NASA Astrophysics Data System (ADS)

    Okamoto, Hideyuki; Masuda, Masao; Kozawa, Yoshiyuki

    It is recognized that the COP of heat-driven heat pump systems using metal hydrides is generally 0.3-0.4. In order to raise the COP, we have proposed two kinds of specific operation control: control of the cycle change time according to the cold heat load, and control of the cooling water temperature according to the outside-air wet-bulb temperature. The characteristics of the metal hydride heat pump system were established through various experiments and simulations, and the validity of the simulation model was confirmed by comparison with experimental results. In simulations programmed for month-by-month operation control, the yearly COP rose to 0.5-0.6 for a practical-scale air-conditioning system, regardless of building use. With hour-by-hour operation control, the yearly COP rose to 0.6-0.65. Moreover, in the office building case with 40% sensible heat recovery added, the yearly COP rose to more than 0.8.

  8. Building a medical multimedia database system to integrate clinical information: an application of high-performance computing and communications technology.

    PubMed

    Lowe, H J; Buchanan, B G; Cooper, G F; Vries, J K

    1995-01-01

    The rapid growth of diagnostic-imaging technologies over the past two decades has dramatically increased the amount of nontextual data generated in clinical medicine. The architecture of traditional, text-oriented, clinical information systems has made the integration of digitized clinical images with the patient record problematic. Systems for the classification, retrieval, and integration of clinical images are in their infancy. Recent advances in high-performance computing, imaging, and networking technology now make it technologically and economically feasible to develop an integrated, multimedia, electronic patient record. As part of The National Library of Medicine's Biomedical Applications of High-Performance Computing and Communications program, we plan to develop Image Engine, a prototype microcomputer-based system for the storage, retrieval, integration, and sharing of a wide range of clinically important digital images. Images stored in the Image Engine database will be indexed and organized using the Unified Medical Language System Metathesaurus and will be dynamically linked to data in a text-based, clinical information system. We will evaluate Image Engine by initially implementing it in three clinical domains (oncology, gastroenterology, and clinical pathology) at the University of Pittsburgh Medical Center. PMID:7703940

  9. High performance 3-coil wireless power transfer system for the 512-electrode epiretinal prosthesis.

    PubMed

    Zhao, Yu; Nandra, Mandheerej; Yu, Chia-Chen; Tai, Yu-chong

    2012-01-01

    The next-generation retinal prostheses feature high image resolution and chronic implantation, demanding wireless and efficient delivery of power as high as 100 mW. A common solution is the 2-coil inductive power link used by current retinal prostheses, which tends to pair a larger extraocular receiver coil with the external transmitter coil and connect the receiver coil to the intraocular electrodes through a trans-sclera, trans-choroid cable. In long-term implantation of the device, the cable may cause hypotony (low intraocular pressure) and infection. However, when a 2-coil system is constructed from a small intraocular receiver coil, the efficiency drops drastically, which may cause excessive heat dissipation and electromagnetic field exposure; our previous 2-coil system achieved only 7% power transfer. This paper presents a fully intraocular and highly efficient wireless power transfer system that introduces another inductive coupling link to bypass the trans-sclera, trans-choroid cable. With the specific equivalent load of our customized 512-electrode stimulator, the 3-coil inductive link was measured to have an overall power transfer efficiency of around 36% with 1-inch separation in saline. The high efficiency favorably reduces heat dissipation and electromagnetic field exposure to surrounding human tissues. The effect of eyeball rotation on the power transfer efficiency was investigated as well: the efficiency is still maintained at 14.7% with left or right deflection of 30 degrees during normal use. The surgical procedure for implanting the coils into the porcine eye was also demonstrated. PMID:23367438

  10. Commodity CPU-GPU System for Low-Cost , High-Performance Computing

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under $2,500 from commodity components, consisting of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4GHz) and two high-end GPUs (NVIDIA's GeForce GTX 295 and Tesla C1060); a 1200-watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment in which the Message Passing Interface (MPI) is used for managing the workloads, transferring data among different GPU devices, and minimizing demands on CPU memory. Test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that speedups for double-precision calculations can be greater than 10 (GPU vs. CPU) and are larger (> 20) for single-precision calculations. In addition, we have enabled the combination of Matlab with CUDA for interactive visualization through MPI: two GPU devices are used for simulation and one GPU device for visualizing the computing results as the simulation runs. Our experience with this commodity system has shown that running multiple applications on one GPU device, or running one application across multiple GPU devices, can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x compared to 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to ready ourselves to run large-scale simulations on the next generation of computing environments, which we believe will hybridize CPU and GPU architectures.
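
    One common way to build such a hybrid MPI-plus-GPU environment is to map each MPI rank onto a GPU device round-robin. The sketch below assumes mpi4py and CuPy are installed; it is an illustrative analogue of the approach, not the authors' MATLAB/CUDA/MAGMA setup.

```python
# Map MPI ranks onto GPU devices and let each rank compute on its own GPU.
# Requires mpi4py and CuPy; run e.g. `mpirun -np 3 python this_script.py`.
from mpi4py import MPI
import cupy as cp
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_gpus = cp.cuda.runtime.getDeviceCount()
cp.cuda.Device(rank % n_gpus).use()      # round-robin ranks over GPUs

# Rank 0 prepares the workload and scatters one chunk per rank.
chunks = None
if rank == 0:
    data = np.random.rand(size, 1_000_000)
    chunks = [data[i] for i in range(size)]
chunk = comm.scatter(chunks, root=0)

partial = float(cp.asarray(chunk).sum())  # GPU computation on the local chunk
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("sum over all GPU workers:", total)
```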

  11. High performance monolithic power management system with dynamic maximum power point tracking for microbial fuel cells.

    PubMed

    Erbay, Celal; Carreon-Bautista, Salvador; Sanchez-Sinencio, Edgar; Han, Arum

    2014-12-01

    Microbial fuel cells (MFCs), which can directly generate electricity from organic waste or biomass, are a promising renewable and clean technology. However, the low power and low voltage output of MFCs typically do not allow direct operation of most electrical applications, whether supplementing electricity to wastewater treatment plants or powering autonomous wireless sensor networks. Power management systems (PMSs) can overcome this limitation by boosting the MFC output voltage and managing the power for maximum efficiency. We present a monolithic, low-power-consuming PMS integrated circuit (IC) chip capable of dynamic maximum power point tracking (MPPT) to maximize the power extracted from MFCs, regardless of their power and voltage fluctuations over time. The proposed PMS continuously detects the maximum power point (MPP) of the MFC and matches the load impedance of the PMS for maximum efficiency. The system also operates autonomously by drawing power directly from the MFC itself, without any external power. The overall system efficiency, defined as the ratio between input energy from the MFC and output energy stored in the supercapacitor of the PMS, was 30%. As a demonstration, the PMS connected to a 240 mL two-chamber MFC (generating 0.4 V and 512 μW at the MPP) successfully powered a wireless temperature sensor that requires a voltage of 2.5 V and consumes 85 mW each time it transmits sensor data, and a sensor reading was successfully transmitted every 7.5 min. The PMS also efficiently managed the power output of a lower-power-producing MFC, demonstrating that the PMS works efficiently at various MFC power output levels. PMID:25365216
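
    Dynamic MPPT is commonly realized as a perturb-and-observe loop: periodically nudge the operating point and keep moving in whichever direction increases extracted power. The sketch below runs that textbook algorithm against a toy MFC source model whose open-circuit voltage and internal resistance are chosen only so that the MPP matches the 0.4 V / 512 uW quoted above; it is not the circuit-level scheme in the paper's IC.

```python
# Perturb-and-observe MPPT against a toy source with internal resistance.
# Standard textbook algorithm, shown for illustration; the paper's PMS
# implements MPPT in mixed-signal hardware, not like this.

V_OC, R_INT = 0.8, 312.5          # assumed open-circuit voltage and source
                                  # resistance; the MPP then sits at V_OC/2

def mfc_power(v_load):
    """Power delivered into the load when held at v_load (toy model)."""
    if not 0.0 < v_load < V_OC:
        return 0.0
    i = (V_OC - v_load) / R_INT
    return v_load * i

v, step = 0.1, 0.01               # initial operating voltage and perturbation
p_prev = mfc_power(v)
for _ in range(200):
    v += step
    p = mfc_power(v)
    if p < p_prev:                # power fell: reverse perturbation direction
        step = -step
    p_prev = p

print(f"converged near v = {v:.2f} V (theoretical MPP = {V_OC/2:.2f} V)")
print(f"power at MPP ~ {mfc_power(V_OC/2)*1e6:.0f} uW")   # ~512 uW
```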

  12. High Performance Fuel Cell and Electrolyzer Membrane Electrode Assemblies (MEAs) for Space Energy Storage Systems

    NASA Technical Reports Server (NTRS)

    Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.

    2012-01-01

    Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements of 0.92 V/1.44 V at 200 mA/cm2 can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks; electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection, and membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst driven, and catalyst selection will be driven by durability requirements. Round-trip efficiency, based on cell performance, is approximately 65%.
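
    That figure follows directly from the stated cell voltages: at matched current density, the round-trip efficiency of a regenerative cell is approximately the discharge voltage divided by the charge voltage. A quick check (a simplification that ignores Faradaic and balance-of-plant losses):

```python
# Voltage-ratio estimate of round-trip efficiency for a regenerative fuel
# cell at matched current density (ignores Faradaic and parasitic losses).
v_fuel_cell = 0.92        # V per cell while discharging, at 200 mA/cm2
v_electrolyzer = 1.44     # V per cell while charging, at 200 mA/cm2

rte = v_fuel_cell / v_electrolyzer
print(f"round-trip efficiency ~ {rte:.1%}")   # ~63.9%, i.e. roughly 65%
```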

  13. High-performance digital triggering system for phase-controlled rectifiers

    SciTech Connect

    Olsen, R.E.

    1983-01-01

    The larger power supplies used to power accelerator magnets are most commonly polyphase rectifiers using phase control. While this method is capable of handling impressive amounts of power, it suffers from one serious disadvantage, namely that of subharmonic ripple. Since the stability of the stored beam depends to a considerable extent on the regulation of the current in the bending magnets, subharmonic ripple, especially that of low frequency, can have a detrimental effect. At the NSLS, we have constructed a 12-pulse, phase control system using digital signal processing techniques that essentially eliminates subharmonic ripple.

  14. High-performance fault-tolerant VLSI systems using micro rollback

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval; Tremblay, Marc

    1990-01-01

    A technique called micro rollback, which allows most of the performance penalty for concurrent error detection to be eliminated, is presented. Detection is performed in parallel with the transmission of information between modules, thus removing the delay for detection from the critical path. Erroneous information may thus reach its destination module several clock cycles before an error indication. Operations performed on this erroneous information are undone using a hardware mechanism for fast rollback of a few cycles. The implementation of a VLSI processor capable of micro rollback is discussed, as well as several critical issues related to its use in a complete system.

  15. A high-performance multilane microdevice system designed for the DNA forensics laboratory.

    PubMed

    Goedecke, Nils; McKenna, Brian; El-Difrawy, Sameh; Carey, Loucinda; Matsudaira, Paul; Ehrlich, Daniel

    2004-06-01

    We report preliminary testing of "GeneTrack", an instrument designed for the specific application of multiplexed short tandem repeat (STR) DNA analysis. The system supports a glass microdevice with 16 lanes of 20 cm effective length and double-T cross injectors. A high-speed galvanometer-scanned four-color detector was specially designed to accommodate the high elution rates on the microdevice. All aspects of the system were carefully matched to practical crime lab requirements for rapid reproducible analysis of crime-scene DNA evidence in conjunction with the United States DNA database (CODIS). Statistically significant studies demonstrate that an absolute, three-sigma, peak accuracy of 0.4-0.9 base pair (bp) can be achieved for the CODIS 13-locus multiplex, utilizing a single channel per sample. Only 0.5 microL of PCR product is needed per lane, a significant reduction in the consumption of costly chemicals in comparison to commercial capillary machines. The instrument is also designed to address problems in temperature-dependent decalibration and environmental sensitivity, which are weaknesses of the commercial capillary machines for the forensics application. PMID:15188257

  16. High performance electrophoresis system for site-specific entrapment of nanoparticles in a nanoarray

    NASA Astrophysics Data System (ADS)

    Han, Jin-Hee; Lakshmana, Sudheendra; Kim, Hee-Joo; Hass, Elizabeth A.; Gee, Shirley; Hammock, Bruce D.; Kennedy, Ian

    2010-02-01

    A nanoarray, integrated with an electrophoretic system, was developed to trap nanoparticles into their corresponding nanowells. Using minimal amounts of sample, this nanoarray overcomes a complication of conventional microarrays, in which proteins lose function and activity on binding to the surface. The nanoarray is also superior to other immunoassay-based biosensors in lowering the limit of detection to the femto- or atto-molar level. In addition, our electrophoretic particle entrapment system (EPES) can effectively trap nanoparticles using a low trapping force for a short duration, so that good conditions are maintained for biological samples conjugated to particles. The channels were patterned onto a bilayer consisting of a PMMA and LOL coating on a conductive indium tin oxide (ITO)-coated glass slide by e-beam lithography. A suspension of 170 nm nanoparticles was then added to the chip, which was connected to a positive voltage. Another ITO-coated glass slide was placed on top of the droplet and connected to a ground terminal. Negatively charged fluorescent nanoparticles (blue emission) were selectively trapped onto the ITO surface at the bottom of the wells by following the electric field lines. Numerical modeling was performed using the commercially available software COMSOL Multiphysics to provide a better understanding of the phenomenon of electrophoresis in a nanoarray. The simulation results are also useful for optimally designing a nanoarray for practical applications.

  17. Design of high performance multivariable control systems for supermaneuverable aircraft at high angle of attack

    NASA Technical Reports Server (NTRS)

    Valavani, Lena

    1995-01-01

    The main motivation for the work under the present grant was to use nonlinear feedback linearization methods to further enhance the performance capabilities of the aircraft and robustify its response throughout its operating envelope. The idea was to use these methods in lieu of standard Taylor series linearization in order to obtain a well-behaved linearized plant over its entire operational regime. Thus, feedback linearization was to constitute an 'inner loop', which would then define a 'design plant model' to be compensated for robustness and guaranteed performance in an 'outer loop' application of modern linear control methods. The motivation for this was twofold: first, earlier work had shown that by appropriately conditioning the plant through conventional, simple feedback in an 'inner loop', the resulting overall compensated plant design enjoyed considerable enhancement of performance robustness in the presence of parametric uncertainty. Second, the nonlinear techniques had no proven robustness properties in the presence of unstructured uncertainty; a definition of robustness (and performance) is very difficult to achieve outside the frequency domain, and to date none is available for the purposes of control system design. Thus, by proper design of the outer loop, such properties could still be 'injected' into the overall system.
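
    For a control-affine system x' = f(x) + g(x)u, the 'inner loop' described above is the classical cancellation u = (v - f(x))/g(x), which makes the inner closed loop behave as the linear system x' = v that the 'outer loop' then compensates. A minimal scalar sketch (the dynamics are invented for illustration, not an aircraft model):

```python
import numpy as np

# Illustrative scalar plant: xdot = f(x) + g(x) * u  (not an aircraft model).
f = lambda x: -x + x**3        # nonlinear drift
g = lambda x: 2.0 + np.sin(x)  # input gain, bounded away from zero

def inner_loop(x, v):
    """Feedback-linearizing control: renders xdot = v exactly."""
    return (v - f(x)) / g(x)

def outer_loop(x, x_ref, k=5.0):
    """Simple linear outer-loop law applied to the linearized plant."""
    return -k * (x - x_ref)

# Simulate the true nonlinear dynamics with forward Euler.
x, dt = 1.5, 1e-3
for _ in range(5000):
    v = outer_loop(x, x_ref=0.0)
    u = inner_loop(x, v)
    x += dt * (f(x) + g(x) * u)    # closed loop behaves as xdot = -5x
print(f"final state: {x:.4f}")      # converges to the reference 0
```

    The cancellation is exact only when f and g are known; the grant's point is precisely that the outer linear loop must supply the robustness the cancellation itself lacks.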

  18. Metal-based anode for high performance bioelectrochemical systems through photo-electrochemical interaction

    NASA Astrophysics Data System (ADS)

    Liang, Yuxiang; Feng, Huajun; Shen, Dongsheng; Long, Yuyang; Li, Na; Zhou, Yuyang; Ying, Xianbin; Gu, Yuan; Wang, Yanfeng

    2016-08-01

    This paper introduces a novel composite anode that uses light to enhance current generation and accelerate biofilm formation in bioelectrochemical systems. The composite anode is composed of a 316L stainless steel substrate and a nanostructured α-Fe2O3 photocatalyst (PSS). The electrode properties, current generation, and biofilm properties of the anode are investigated. In terms of photocurrent, the optimal deposition and heat-treatment times are found to be 30 min and 2 min, respectively, which result in a maximum photocurrent of 0.6 A m-2. The start-up time of the PSS is 1.2 days and the maximum current density is 2.8 A m-2, twice and 25 times those of the unmodified anode, respectively. The current density of the PSS remains stable during 20 days of illumination. Confocal laser scanning microscope images show that the PSS benefits biofilm formation, while electrochemical impedance spectroscopy indicates that the PSS reduces the charge-transfer resistance of the anode. Our findings show that photo-electrochemical interaction is a promising way to enhance the biocompatibility of metal anodes for bioelectrochemical systems.

  19. Cpl6: The New Extensible, High-Performance Parallel Coupler forthe Community Climate System Model

    SciTech Connect

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brain; Bettge,Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used state-of-the-art climate model that has released several versions to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system, such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.

  20. Low-cost high performance adaptive optics real-time controller in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Chen, Shanqiu; Liu, Chao; Zhao, Enyi; Xian, Hao; Xu, Bing; Ye, Yutang

    2014-11-01

    This paper proposes a low-cost, high performance adaptive optics real-time controller for a free space optical communication system. The real-time controller is built from a 4-core CPU running Linux patched with the Real-Time Application Interface (RTAI) and a frame grabber, for a total cost below $6000. A multi-core parallel processing scheme and SSE instruction optimization of the reconstruction process yield about a 5x speedup, and the overall processing time for this 137-element adaptive optics system reaches below 100 us, with a latency of about 50 us, by using a streamlined processing scheme; this meets the requirement of processing at frame rates over 1709 Hz. A real-time data storage system built on a circular buffer lets the system store consecutive image frames and provides a way to analyze the image data and intermediate data such as slope information.
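
    The paper does not detail the circular-buffer storage scheme beyond its purpose; the sketch below shows the usual fixed-size ring-buffer pattern for retaining the most recent frames. The frame count, shape, and dtype are assumptions, not the authors' values.

```python
import numpy as np

class FrameRingBuffer:
    """Fixed-size circular buffer holding the most recent camera frames.
    A generic sketch of the circular-buffer storage idea; sizes assumed."""

    def __init__(self, n_frames=1024, shape=(128, 128), dtype=np.uint16):
        self.buf = np.zeros((n_frames,) + shape, dtype=dtype)  # preallocated
        self.n = n_frames
        self.head = 0       # index of the next write
        self.count = 0      # frames stored so far (saturates at n)

    def push(self, frame):
        self.buf[self.head] = frame            # overwrite the oldest slot
        self.head = (self.head + 1) % self.n
        self.count = min(self.count + 1, self.n)

    def latest(self, k):
        """Return the k most recent frames, oldest first."""
        k = min(k, self.count)
        idx = [(self.head - k + i) % self.n for i in range(k)]
        return self.buf[idx]
```

    Preallocating the whole buffer keeps the per-frame cost to one copy and two index updates, which is what makes the pattern usable in a hard real-time loop.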

  1. A high-performance network for a distributed-control system

    NASA Astrophysics Data System (ADS)

    Cuttone, G.; Aghion, F.; Giove, D.

    1989-04-01

    Local area networks play a central role in modern distributed-control systems for accelerators. For a superconducting cyclotron under construction at the University of Milan, an optical Ethernet network has been implemented for the interconnection of multicomputer-based stations. Controller boards with VLSI protocol chips have been used. The higher levels of the ISO OSI model have been implemented to suit real-time control requirements. The experimental setup for measuring the data throughput between stations will be described. The effect of memory-to-memory data transfer with respect to packet size has been studied for packets ranging from 200 bytes to 10 Kbytes. Results, showing the data throughput to range from 0.2 to 1.1 Mbit/s, will be discussed.

  2. POPE: A distributed query system for high performance analysis of very large persistent object stores

    SciTech Connect

    Fischler, M.S.; Isely, M.C.; Nigri, A.M.; Rinaldo, F.J.

    1996-01-01

    Analysis of large physics data sets is a major computing task at Fermilab. One step in such an analysis involves culling 'interesting' events via the use of complex query criteria. What makes this unusual is the scale required: hundreds of gigabytes of event data must be scanned at tens of megabytes per second for the typical queries that are applied, and data must be extracted from tens of terabytes based on the result of the query. The Physics Object Persistency Manager (POPM) system is a solution tailored to this scale of problem. A running POPM environment can support multiple queries in progress, each scanning at rates exceeding 10 megabytes per second, all of which share access to a very large persistent address space distributed across multiple disks on multiple hosts. Specifically, POPM employs the following techniques to permit this scale of performance and access. Persistent objects: experimental data to be scanned is 'populated' as a data structure into the persistent address space supported by POPM; C++ classes with a few key overloaded operators provide nearly transparent semantics for access to the persistent storage. Distributed and parallel I/O: the persistent address space is automatically distributed across the disks of multiple 'I/O nodes' within the POPM system; a striping unit concept is implemented in POPM, permitting fast parallel I/O across the storage nodes, even for small single queries. Efficient shared access: POPM implements an efficient mechanism for arbitration and multiplexing of I/O access among multiple queries on the same or separate compute nodes.

  3. Toward server-side, high performance climate change data analytics in the Earth System Grid Federation (ESGF) eco-system

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Williams, Dean; Aloisio, Giovanni

    2016-04-01

    In many scientific domains, such as climate science, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5 PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex 'experiments'. The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ('datacubes'). The project relies on a strong background in high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support for defining processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability, the talk will present the contributions provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF).

  4. Compressive sensing based Bayesian sparse channel estimation for OFDM communication systems: high performance and low complexity.

    PubMed

    Gui, Guan; Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division multiplexing (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, which can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without reporting the posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which can not only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without sacrificing any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns of the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods. PMID:24983012
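
    For reference, the greedy baseline the authors compare against, orthogonal matching pursuit, can be written compactly for estimating a sparse channel h from y = Xh + n. Dimensions, sparsity level, and noise below are illustrative, not the paper's simulation setup.

```python
import numpy as np

def omp_channel_estimate(X, y, n_taps):
    """Orthogonal matching pursuit: greedily pick the training-matrix
    columns most correlated with the residual, then least-squares refit.
    X: (n_obs, n_delays) training matrix; y: observations; n_taps: sparsity."""
    residual, support = y.copy(), []
    for _ in range(n_taps):
        corr = np.abs(X.conj().T @ residual)
        corr[support] = 0                      # don't re-pick a column
        support.append(int(np.argmax(corr)))
        h_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ h_s
    h = np.zeros(X.shape[1], dtype=complex)
    h[support] = h_s
    return h

# Toy example: 3 dominant taps out of 64 delays, 32 training observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64)) / np.sqrt(32)
h_true = np.zeros(64, dtype=complex)
h_true[[3, 17, 41]] = [1.0, -0.6, 0.3]
y = X @ h_true + 0.01 * rng.normal(size=32)
print(np.linalg.norm(omp_channel_estimate(X, y, 3) - h_true))
```

    The failure modes the abstract names are visible here: a noisy residual or strongly correlated columns can make the greedy argmax pick a wrong tap, and the point estimate carries no posterior uncertainty, which is the gap the Bayesian method targets.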

  5. Coal-fired high performance power generating system. Quarterly progress report, October 1--December 31, 1992

    SciTech Connect

    Not Available

    1992-12-31

    Our team has outlined a research plan based on an optimized analysis of a 250 MWe combined cycle system applicable to both frame type and aeroderivative gas turbines. Under the constraints of the cycle analysis we have designed a high temperature advanced furnace (FUTAF) which integrates several combustor and air heater designs with appropriate ash management procedures. The Cycle Optimization effort under Task 2 outlines the evolution of our designs. The basic combined cycle approach now includes exhaust gas recirculation to quench the flue gas before it enters the convective air heater. By selecting the quench gas from a downstream location it will be clean enough and cool enough (ca. 300 F) to be driven by a commercially available fan and still minimize the volume of the convective air heater. Further modeling studies on the long axial flame, under Task 3, have demonstrated that this configuration is capable of providing the necessary energy flux to the radiant air panels. This flame, with its controlled mixing, constrains the combustion to take place in a fuel rich environment, thus minimizing NO{sub x} production. Recent calculations indicate that the NO{sub x} produced is low enough that the SNCR section can further reduce it to within the DOE goal of 0.15 lbs/MBTU of fuel input. Also under Task 3, the air heater design optimization continued.

  6. High-Performance Water Electrolysis System with Double Nanostructured Superaerophobic Electrodes.

    PubMed

    Xu, Wenwen; Lu, Zhiyi; Wan, Pengbo; Kuang, Yun; Sun, Xiaoming

    2016-05-01

    Catalyst screening and structural optimization are both essential for pursuing a highly efficient water electrolysis system (WES) with reduced energy consumption. This study demonstrates an advanced WES with double superaerophobic electrodes, achieved by constructing nanostructured NiMo alloy and NiFe layered double hydroxide (NiFe-LDH) films for the hydrogen evolution and oxygen evolution reactions, respectively. The superaerophobic property gives rise to significantly reduced adhesion forces to gas bubbles and thereby accelerates the release of hydrogen and oxygen bubbles. Benefiting from these merits and the high intrinsic activities of the catalysts, this WES affords an early onset potential (≈1.5 V) for water splitting and an ultrafast catalytic current density increase (≈0.83 mA mV(-1)), resulting in ≈2.69 times higher performance at 1.9 V compared to the counterpart based on commercial Pt/C and IrO2/C catalysts. Moreover, enhanced performance at high temperature as well as prominent stability further demonstrate the practical applicability of this WES. PMID:26997618

  7. High performance CMOS image sensor for digitally fused day/night vision systems

    NASA Astrophysics Data System (ADS)

    Fowler, Boyd; Vu, Paul; Liu, Chiao; Mims, Steve; Do, Hung; Li, Wang; Appelbaum, Jeff

    2010-04-01

    We present the performance of a CMOS image sensor optimized for next generation fused day/night vision systems. The device features 5T pixels with pinned photodiodes on a 6.5 μm pitch with integrated micro-lenses. The 5T pixel architecture enables both correlated double sampling (CDS) to reduce noise for night-time operation and a lateral antiblooming drain for daytime operation. The measured peak quantum efficiency of the sensor is above 55% at 600 nm, and the median read noise is less than 1 e- RMS at room temperature. The sensor features dual-gain 11-bit data output ports and supports 30 fps and 60 fps operation. The full well capacity is greater than 30 ke-, the dark current is less than 3.8 pA/cm2 at 20°C, and the MTF at 77 lp/mm is 0.4 at 550 nm. The sensor also achieves an intra-scene linear dynamic range of greater than 90 dB (30000:1) for night-time operation, and an inter-scene linear dynamic range of greater than 150 dB for complete day/night operability.

  8. Architecture of a high-performance PACS based on a shared file system

    NASA Astrophysics Data System (ADS)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high speed, fault tolerant image file server, or Working Storage Unit (WSU), combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low cost, easy to use, yet extremely powerful medical image workstations. The Siemens Litebox(TM) application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  9. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    PubMed Central

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  10. Investigating the effectiveness of many-core network processors for high performance cyber protection systems. Part I, FY2011.

    SciTech Connect

    Wheeler, Kyle Bruce; Naegle, John Hunt; Wright, Brian J.; Benner, Robert E., Jr.; Shelburg, Jeffrey Scott; Pearson, David Benjamin; Johnson, Joshua Alan; Onunkwo, Uzoma A.; Zage, David John; Patel, Jay S.

    2011-09-01

    This report documents our first year of efforts to address the use of many-core processors for high performance cyber protection. As the demand grows for higher bandwidth (beyond 1 Gbit/s) on network connections, the need for faster and more efficient solutions for cyber security grows with it. Fortunately, in recent years the development of many-core network processors has seen increased interest. Prior working experience with many-core processors has led us to investigate their effectiveness for cyber protection tools, with particular emphasis on high performance firewalls. Although advanced algorithms for smarter cyber protection of high-speed network traffic are being developed, these advanced analysis techniques require significantly more computational capability than static techniques. Moreover, many locations where cyber protections are deployed have limited power, space and cooling resources. This makes the use of traditionally large computing systems impractical for the front-end systems that process large network streams; hence the drive for this study, which could potentially yield a highly reconfigurable and rapidly scalable solution.
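
    As a toy illustration of why firewall workloads suit many-core processors (not the report's implementation), rule matching is largely independent per packet or per flow, so packets can be dispatched to per-core workers. The rule set and dispatch scheme below are invented for the sketch.

```python
import ipaddress
from multiprocessing import Pool

# Toy first-match-wins rule set; real firewalls compile far richer rules.
RULES = [(ipaddress.ip_network("10.0.0.0/8"), "drop"),
         (ipaddress.ip_network("0.0.0.0/0"), "accept")]

def filter_packet(src_ip):
    """Return the action for one packet; independent per packet, so it
    parallelizes trivially across cores."""
    addr = ipaddress.ip_address(src_ip)
    for net, action in RULES:
        if addr in net:
            return src_ip, action

if __name__ == "__main__":
    packets = ["10.1.2.3", "192.168.0.5", "10.9.9.9", "8.8.8.8"]
    # Pool dispatch stands in for the flow-hash distribution a real
    # many-core network processor would do in hardware.
    with Pool(4) as pool:
        print(pool.map(filter_packet, packets))
```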

  12. Toward a Performance/Resilience Tool for Hardware/Software Co-Design of High-Performance Computing Systems

    SciTech Connect

    Engelmann, Christian; Naughton, III, Thomas J

    2013-01-01

    xSim is a simulation-based performance investigation toolkit that permits running high-performance computing (HPC) applications in a controlled environment with millions of concurrent execution threads, while observing application performance in a simulated extreme-scale system for hardware/software co-design. The presented work details newly developed features for xSim that permit the injection of MPI process failures, the propagation/detection/notification of such failures within the simulation, and their handling using application-level checkpoint/restart. These new capabilities enable the observation of application behavior and performance under failure within a simulated future-generation HPC system using the most common fault handling technique.
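
    Application-level checkpoint/restart, the fault-handling technique whose behavior xSim simulates here, reduces in its simplest form to periodically serializing application state and re-entering the main loop from the last saved state after a failure. A generic sketch (file name, state layout, and interval are arbitrary assumptions):

```python
import os
import pickle

CKPT = "app_state.ckpt"    # checkpoint file name (arbitrary choice)

def load_or_init():
    """Resume from the last checkpoint if one exists, else cold-start."""
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as fh:
            return pickle.load(fh)          # restart path after a failure
    return {"step": 0, "accum": 0.0}        # cold-start path

def checkpoint(state):
    """Write the checkpoint atomically so a crash never leaves a torn file."""
    tmp = CKPT + ".tmp"
    with open(tmp, "wb") as fh:
        pickle.dump(state, fh)
    os.replace(tmp, CKPT)                   # atomic rename

state = load_or_init()
while state["step"] < 1_000:
    state["accum"] += state["step"] ** 0.5  # stand-in for real work
    state["step"] += 1
    if state["step"] % 100 == 0:            # checkpoint interval
        checkpoint(state)
```

    If the process is killed and relaunched, it loses at most one interval of work, which is the cost/coverage trade-off a tool like xSim lets designers quantify at scale.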

  13. Development of a High-performance Optical System and Fluorescent Converters for High-resolution Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Sakai, T.; Yasuda, R.; Iikura, H.; Nojima, T.; Matsubayashi, M.

    Two novel devices for use in neutron imaging are introduced. The first is a high-performance optical lens for video camera systems. The lens system has a magnification of 1:1 and an F value of 3, and its optical resolution is less than 5 μm. The second device is a high-resolution fluorescent plate that converts neutrons into visible light. The fluorescent converter material consists of a mixture of 6LiF and ZnS(Ag) fine powder, and the thickness of the converter material is as little as 15 μm. The surface of the plate is coated with a 1 μm-thick gadolinium oxide layer. This layer is optically transparent and acts as an electron emitter for neutron detection. Our preliminary results show that the developed optical lens and fluorescent converter plates are very promising for high-resolution neutron imaging.

  14. High Performance, Dependable Multiprocessor

    NASA Technical Reports Server (NTRS)

    Ramos, Jeremy; Samson, John R.; Troxel, Ian; Subramaniyan, Rajagopal; Jacobs, Adam; Greco, James; Cieslewski, Grzegorz; Curreri, John; Fischer, Michael; Grobelny, Eric; George, Alan; Aggarwal, Vikas; Patel, Minesh; Some, Raphael

    2006-01-01

    With the ever increasing demand for higher bandwidth and processing capacity in today's space exploration, space science, and defense missions, the ability to efficiently apply commercial-off-the-shelf (COTS) processors for on-board computing is now a critical need. In response to this need, NASA's New Millennium Program office has commissioned the development of Dependable Multiprocessor (DM) technology for use in payload and robotic missions. The Dependable Multiprocessor technology is a COTS-based, power efficient, high performance, highly dependable, fault tolerant cluster computer. To date, Honeywell has successfully demonstrated a TRL4 prototype of the Dependable Multiprocessor [1], and is now working on the development of a TRL5 prototype. For the present effort Honeywell has teamed up with the University of Florida's High-performance Computing and Simulation (HCS) Lab, and together the team has demonstrated major elements of the Dependable Multiprocessor TRL5 system.

  15. Small Delay and High Performance AD/DA Converters of Lease Circuit System for AM&FM Broadcast

    NASA Astrophysics Data System (ADS)

    Takato, Kenji; Suzuki, Dai; Ishii, Takashi; Kobayashi, Masato; Yamada, Hirokazu; Amano, Shigeru

    Many AM&FM broadcasting stations in Japan are connected by the leased circuit system of NTT. A small-delay, high-performance AD/DA converter was developed for this system. The system was designed based on ITU-T Recommendation J.41 (384 kbps); the transmission signal is 11-bit at 32 kHz, and the gain-frequency characteristics between 40 Hz and 15 kHz have to be quite flat. The ΔΣ AD/DA converter LSIs for audio applications on the market today achieve very high performance, but their performance is not sufficient for the leased circuit system. We found that it is not possible to meet the delay and gain-frequency requirements using a ΔΣ AD/DA converter LSI in normal operation alone, because the highest signal frequency of 15 kHz and the Nyquist frequency of 16 kHz are too close, so aliasing appears around the Nyquist frequency. In this paper, we present an AD/DA architecture with small delay (1 msec) and a sharp cut-off LPF (100 dB attenuation at 16 kHz, and 1500 dB/Oct from 15 kHz to 16 kHz), obtained by operating the ΔΣ AD/DA converter LSIs at an over-sampling rate such as 128 kHz and by adding a custom LPF designed as an Infinite Impulse Response (IIR) filter. The IIR filter is a 16th-order elliptic type and consists of eight biquad filters in series. We describe how the stability of the IIR filter was evaluated, theoretically by calculating the frequency response, pole-zero layout, and impulse response of each biquad filter, and experimentally by adding an overflow detection circuit to each filter and applying overload input signals.
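
    The described filter (16th-order elliptic low-pass realized as eight cascaded biquads) maps directly onto SciPy's second-order-sections design. The sketch below reproduces the general shape of that filter at the stated 128 kHz over-sampling rate; the 0.05 dB passband ripple is an assumption, since the abstract gives only the order, topology, and stopband attenuation.

```python
import numpy as np
from scipy import signal

fs = 128_000       # over-sampling rate used by the AD/DA LSIs (per paper)
f_pass = 15_000    # passband edge; the stopband must be down by 16 kHz

# 16th-order elliptic low-pass as second-order sections (eight biquads).
# 0.05 dB ripple is an illustrative assumption; 100 dB matches the paper.
sos = signal.ellip(16, 0.05, 100, f_pass, btype="low", fs=fs, output="sos")
print(sos.shape)   # (8, 6): eight cascaded biquad sections

# Gain at the two critical frequencies:
w, h = signal.sosfreqz(sos, worN=[15_000, 16_000], fs=fs)
print(20 * np.log10(np.abs(h)))   # ~0 dB at 15 kHz; <= -100 dB by 16 kHz

# Theoretical per-biquad stability check: all poles inside the unit circle.
for b0, b1, b2, a0, a1, a2 in sos:
    poles = np.roots([a0, a1, a2])
    assert np.all(np.abs(poles) < 1.0)
```

    Per the abstract, the experimental stability check instead used overflow detection on each biquad under overload input; the pole-radius assertion above is the theoretical counterpart.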

  16. Advanced real-time bus system for concurrent data paths used in high-performance image processing

    NASA Astrophysics Data System (ADS)

    Brodersen, Jorg; Palkovich, Roland; Landl, Dieter; Furtler, Johannes; Dulovits, Martin

    2004-05-01

    In this paper we present a new bus protocol satisfying extreme real-time demands. It has been applied to a high performance quality inspection system that can involve up to eight sensors of various types. Thanks to its modular configuration, this multi-sensor inspection system acts on the outside as a single-sensor image processing system. In general, image processing systems comprise three basic functions: (i) image acquisition, (ii) image processing, and (iii) output of processed data. The data transfers for these three fundamental functions can be accomplished either by individual bus systems or by a single bus. When a single bus is used, the implementation complexity, i.e., protocol development, hardware requirements, and EMC considerations, is far smaller. An important goal of the new protocol design is to support extremely fast communication between individual processing modules. For example, the input data (image acquisition) is transferred in real time to individual processing modules while, concurrently, the processed data is transferred to the output module. The key function of this protocol is therefore to realize concurrent data paths (data rates over 1.2 Gbit/s) using principles of pipeline architectures and time division multiplexing, enabling concurrent data transfers over a single bus system. The function of the new bus protocol, including the hardware layout and an innovative bus arbiter, is described in detail.
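
    As a toy illustration of the time-division-multiplex idea (not the authors' actual protocol or arbiter), a static slot table can interleave the three data paths on one physical bus, with idle slots reclaimed by any waiting path:

```python
# Toy static TDM bus arbiter: one physical bus serves three logical data
# paths in fixed slots. The slot pattern and the idle-slot reclamation
# rule are assumptions made for the sketch.

SLOT_TABLE = ["acquire", "process", "acquire", "output"]   # assumed pattern

def grant(cycle, pending):
    """Return which data path owns the bus this cycle."""
    owner = SLOT_TABLE[cycle % len(SLOT_TABLE)]
    if pending.get(owner):
        return owner
    for path, n_waiting in pending.items():   # reclaim an idle slot
        if n_waiting:
            return path
    return None

# Demo: acquisition traffic dominates; output transfers fill idle slots.
pending = {"acquire": 5, "process": 1, "output": 3}
for cycle in range(8):
    g = grant(cycle, pending)
    if g:
        pending[g] -= 1
    print(cycle, g)
```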

  17. Coal-fired high performance power generating system. Quarterly progress report, October 1, 1994--December 31, 1994

    SciTech Connect

    1995-08-01

    This report covers work carried out under Task 3, Preliminary R and D, under contract DE-AC22-92PC91155, {open_quotes}Engineering Development of a Coal-Fired High Performance Power Generation System{close_quotes} between DOE Pittsburgh Energy Technology Center and United Technologies Research Center. The goals of the program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: (1) >47% thermal efficiency; (2) NO{sub x}, SO{sub x} and particulates {<=}25% NSPS; (3) coal providing {>=}65% of heat input; (4) all solid wastes benign. In our design considerations, we have tried to render all waste streams benign and, if possible, convert them to commercial products. It appears that vitrified slag has commercial value. If the flyash is reinjected through the furnace, along with the dry bottom ash, then the amount of the less valuable solid waste stream (ash) can be minimized. A limitation on this procedure arises if it results in the buildup of toxic metal concentrations in either the slag, the flyash or other APCD components. We have assembled analytical tools to describe the progress of specific toxic metals through our system. The outline of the analytical procedure is presented in the first section of this report. The strengths and corrosion resistance of five candidate refractories have been studied this quarter. Some of the results are presented and compared for selected preparation conditions (mixing, drying time and drying temperatures). A 100 hour pilot-scale slagging combustor test of the prototype radiant panel is being planned. Several potential refractory brick materials are under review and five will be selected for the first 100 hour test. The design of the prototype panel is presented along with some of the test requirements.

  18. High performance steam development

    SciTech Connect

    Duffy, T.; Schneider, P.

    1995-12-31

    DOE has launched a program to make a step change in power plant performance by moving to 1500 F steam, since the highest possible performance gains can be achieved in a 1500 F steam system using a topping turbine together with a back pressure steam turbine for cogeneration. A 500-hour proof-of-concept steam generator test module was designed, fabricated, and successfully tested. It has four once-through steam generator circuits. The complete HPSS (high performance steam system) was tested above 1500 F and 1500 psig for over 102 hours at full power.

  19. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  20. Enabling Interoperation of High Performance, Scientific Computing Applications: Modeling Scientific Data with the Sets & Fields (SAF) Modeling System

    SciTech Connect

    Miller, M C; Reus, J F; Matzke, R P; Arrighi, W J; Schoof, L A; Hitt, R T; Espen, P K; Butler, D M

    2001-02-07

    This paper describes the Sets and Fields (SAF) scientific data modeling system, a revolutionary approach to the interoperation of high performance, scientific computing applications based upon rigorous, math-oriented data modeling principles. Previous technologies have required all applications to use the same data structures and/or meshes to represent scientific data, or have led to an ever expanding set of incrementally different data structures and/or meshes. SAF addresses this problem by providing a small set of mathematical building blocks--sets, relations and fields--out of which a wide variety of scientific data can be characterized. Applications literally model their data by assembling these building blocks. A short historical perspective, a conceptual model, and an overview of SAF, along with preliminary results from its use in a few ASCI codes, are discussed.

  1. Evaluation of C/C-SiC Composites as Potential Candidate Materials for High Performance Braking Systems

    NASA Astrophysics Data System (ADS)

    Saptono Duryat, Rahmat

    2016-05-01

    This paper evaluates the characteristics and performance of C/C-SiC composites as candidate materials for high performance braking systems. A set of material specifications was derived from specific engineering design requirements. The analysis was performed by formulating the functions, constraints, and objectives of the design and materials selection. The function of a friction material is chiefly to provide friction and to absorb and dissipate energy, while withstanding load and maintaining structural adequacy and tribological characteristics at high temperature. The objective of the materials selection and design is to maximize the absorption and dissipation of energy and to minimize weight and cost. Candidate materials were evaluated based on their friction and wear behavior, thermal capacity and conductivity, structural properties, manufacturing properties, and densities. The present paper provides a state-of-the-art example of how materials, function, geometry, and design are all interrelated.

  2. High-performance SPME/AP MALDI system for high-throughput sampling and determination of peptides.

    PubMed

    Wang, Yan; Schneider, Bradley B; Covey, Thomas R; Pawliszyn, Janusz

    2005-12-15

    This paper presents the performance characteristics of a new multiplexed solid-phase microextraction/atmospheric pressure matrix-assisted laser desorption/ionization (SPME/AP MALDI) source configuration for a hybrid quadrupole-linear ion trap instrument. The results demonstrate that thorough optimization of parameters such as the SPME coating material, optics configuration, extraction solvents, and fiber capacity provides dramatic sensitivity improvements (>1000x) over previous reports in the literature. The multiplexed SPME plate is capable of simultaneous extraction from 16 different wells on a multiwell plate, eliminating the need for extensive sample preparation. Subfemtomole sensitivity is demonstrated for peptide standards and protein digests, with run-to-run reproducibility ranging from approximately 13 to 31%. This high-performance SPME/AP MALDI system shows potential for high-throughput extraction from biological samples. PMID:16351160

  3. Development of a high-performance gantry system for a new generation of optical slope measuring profilers

    NASA Astrophysics Data System (ADS)

    Assoufid, Lahsen; Brown, Nathan; Crews, Dan; Sullivan, Joseph; Erdmann, Mark; Qian, Jun; Jemian, Pete; Yashchuk, Valeriy V.; Takacs, Peter Z.; Artemiev, Nikolay A.; Merthe, Daniel J.; McKinney, Wayne R.; Siewert, Frank; Zeschke, Thomas

    2013-05-01

    A new high-performance metrology gantry system has been developed within the scope of collaborative efforts of the optics groups at the US Department of Energy synchrotron radiation facilities and the BESSY-II synchrotron at the Helmholtz Zentrum Berlin (Germany), with the participation of industrial vendors of x-ray optics and metrology instrumentation, directed at creating a new generation of optical slope measuring systems (OSMS) [1]. The slope measurement accuracy of the OSMS is expected to be <50 nrad, which is required for the current and future metrology of x-ray optics for the next generation of light sources. The fabricated system was installed and commissioned (December 2012) at the Advanced Photon Source (APS) at Argonne National Laboratory to replace the aging APS Long Trace Profiler (APS LTP-II). Preliminary tests were conducted (in January and May 2012) using the optical system configuration of the Nanometer Optical Component Measuring Machine (NOM) developed at Helmholtz Zentrum Berlin (HZB)/BESSY-II. With a flat Si mirror that is 350 mm long and has 200 nrad rms nominal slope error over a useful length of 300 mm, the system provides a repeatability of about 53 nrad. This value corresponds to the design performance of 50 nrad rms accuracy for the inspection of ultra-precise flat optics.

  4. The Open Cloud Testbed: Supporting Open Source Cloud Computing Systems Based on Large Scale High Performance, Dynamic Network Services

    NASA Astrophysics Data System (ADS)

    Grossman, Robert; Gu, Yunhong; Sabala, Michal; Bennet, Colin; Seidman, Jonathan; Mambratti, Joe

    Recently, a number of cloud platforms and services have been developed for data intensive computing, including Hadoop, Sector, CloudStore (formerly KFS), HBase, and Thrift. In order to benchmark the performance of these systems, to investigate their interoperability, and to experiment with new services based on flexible compute node and network provisioning capabilities, we have designed and implemented a large scale testbed called the Open Cloud Testbed (OCT). Currently OCT has 120 nodes in 4 data centers: Baltimore, Chicago (two locations), and San Diego. In contrast to other cloud testbeds, which are in small geographic areas and which are based on commodity Internet services, the OCT is a wide area testbed and the 4 data centers are connected with a high performance 10Gb/s network, based on a foundation of dedicated lightpaths. This testbed can address the requirements of extremely large data streams that challenge other types of distributed infrastructure. We have also developed several utilities to support the development of cloud computing systems and services, including novel node and network provisioning services, a monitoring system, and an RPC system. In this paper, we describe the OCT concepts, architecture, infrastructure, a few benchmarks that were developed for this platform, interoperability studies, and results.

  5. High Performance, Low Operating Voltage n-Type Organic Field Effect Transistor Based on Inorganic-Organic Bilayer Dielectric System

    NASA Astrophysics Data System (ADS)

    Dey, A.; Singh, A.; Kalita, A.; Das, D.; Iyer, P. K.

    2016-04-01

    The performance of organic field-effect transistors (OFETs) fabricated using the vacuum-deposited n-type conjugated molecule N,N'-dioctadecyl-1,4,5,8-naphthalenetetracarboxylic diimide (NDIOD2) was investigated with single and bilayer dielectric systems on a low-cost glass substrate. The single layer device structure uses poly(vinyl alcohol) (PVA) as the dielectric material, whereas the bilayer systems comprise two different device configurations, namely aluminum oxide/poly(vinyl alcohol) (Al2O3/PVA) and aluminum oxide/poly(methyl methacrylate) (Al2O3/PMMA), intended to reduce the operating voltage and improve the device performance. The devices with the Al2O3/PMMA bilayer dielectric system and top-contact aluminum electrodes exhibit excellent n-channel behaviour under vacuum compared to the other two structures, with an electron mobility of 0.32 cm2/Vs, a threshold voltage of ~1.8 V, and a current on/off ratio of ~10^4, operating at a very low voltage (6 V). These devices demonstrate highly stable electrical behaviour under multiple scans and lower threshold voltage instability under vacuum, even after 7 days, than the Al2O3/PVA device structure. This low operating voltage, high performance OTFT device with a bilayer dielectric system is expected to have diverse applications in the next generation of OTFT technologies.

  6. Conceptual design of a self-deployable, high performance parabolic concentrator for advanced solar-dynamic power systems

    NASA Technical Reports Server (NTRS)

    Dehne, Hans J.

    1991-01-01

    NASA has initiated technology development programs to develop advanced solar dynamic power systems and components for space applications beyond 2000. The conceptual design work that was performed is described. The main efforts were: (1) the conceptual design of a self-deploying, high-performance parabolic concentrator; and (2) materials selection for a lightweight, shape-stable concentrator. The deployment concept utilizes rigid gore-shaped reflective panels, and the assembled concentrator takes an annular shape with a void in the center. This deployable concentrator concept is applicable to a range of solar dynamic power systems from 25 kWe to in excess of 75 kWe, and allows for a family of power system sizes all using the same packaging and deployment technique. The primary structural material selected for the concentrator is a polyether etherketone/carbon fiber composite, also referred to as APC-2 or Victrex. This composite has a nearly neutral coefficient of thermal expansion, which leads to shape-stable characteristics under thermal gradients. Substantial efforts were undertaken to produce a highly specular surface on the composite. The overall coefficient of thermal expansion of the composite laminate is near zero, but thermally induced stresses due to micro-movement of the fibers and matrix relative to each other cause the surface to become nonspecular.

  7. A high performance system to study the influence of temperature in on-line solid-phase extraction capillary electrophoresis.

    PubMed

    Tascon, Marcos; Benavente, Fernando; Sanz-Nebot, Victoria; Gagliardi, Leonardo G

    2015-03-10

    A novel high performance system to control the temperature of the microcartridge in on-line solid phase extraction capillary electrophoresis (SPE-CE) is introduced. The mini-device consists of a thermostatic bath that fits inside the cassette of any commercial CE instrument, with its temperature controlled by an external liquid circuit connecting three different water baths. The circuits are controlled from a switchboard connected to an array of electrovalves that allows the water circulating through the mini-thermostatic bath to be rapidly switched between temperatures from 5 to 90 °C. The combination of the mini-device and the forced-air thermostatization system of the commercial CE instrument allows the temperatures of the sample loading, clean-up, analyte elution, and electrophoretic separation steps to be optimized independently. The system is used to study the effect of temperature on the C18-SPE-CE analysis of the opioid peptides Dynorphin A (Dyn A), Endomorphin-1 (END), and Met-enkephalin (MET) in both standard solutions and spiked plasma samples. Extraction recoveries were found to depend on the microcartridge temperature during sample loading, with a non-monotonic trend, reaching a maximum at 60 °C. The results prove the potential of temperature control to further enhance sensitivity in SPE-CE when the analytes are thermally stable. PMID:25732315

  8. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1996-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper, only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  9. Closed-Loop System Identification Experience for Flight Control Law and Flying Qualities Evaluation of a High Performance Fighter Aircraft

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.

    1999-01-01

    This paper highlights some of the results and issues associated with estimating models to evaluate control law design methods and design criteria for advanced high performance aircraft. Experimental fighter aircraft such as the NASA High Alpha Research Vehicle (HARV) have the capability to maneuver at very high angles of attack where nonlinear aerodynamics often predominate. HARV is an experimental F/A-18, configured with thrust vectoring and conformal actuated nose strakes. Identifying closed-loop models for this type of aircraft can be made difficult by nonlinearities and high-order characteristics of the system. In this paper only lateral-directional axes are considered since the lateral-directional control law was specifically designed to produce classical airplane responses normally expected with low-order, rigid-body systems. Evaluation of the control design methodology was made using low-order equivalent systems determined from flight and simulation. This allowed comparison of the closed-loop rigid-body dynamics achieved in flight with that designed in simulation. In flight, the On Board Excitation System was used to apply optimal inputs to lateral stick and pedals at five angles of attack: 5, 20, 30, 45, and 60 degrees. Data analysis and closed-loop model identification were done using frequency domain maximum likelihood. The structure of the identified models was a linear state-space model reflecting classical 4th-order airplane dynamics. Input time delays associated with the high-order controller and aircraft system were accounted for in data preprocessing. A comparison of flight estimated models with small perturbation linear design models highlighted nonlinearities in the system and indicated that the estimated closed-loop rigid-body dynamics were sensitive to input amplitudes at 20 and 30 degrees angle of attack.

  10. Determination of salbutamol in human plasma and urine by high-performance liquid chromatography with a coulometric electrode array system.

    PubMed

    Zhang, X Z; Gan, Y R; Zhao, F N

    2004-01-01

    A method is developed to determine salbutamol in human plasma and urine using high-performance liquid chromatography (HPLC) with a coulometric electrode array system, based on the electrochemical behavior of salbutamol at a graphite electrode. Mobile phase component A is 30 mM sodium dihydrogen phosphate-30 mM triethylamine, adjusted to pH 6.0 with 20% phosphoric acid; component B is methanol. The optimized mobile phase composition was A and B in the proportion 90:10 (v/v). Paracetamol is used as the external standard. The human plasma and urine samples are pretreated using solid-phase extraction cartridges (Sep-Pak Silica), and the eluate is monitored by the coulometric electrode array system. The electrode potentials are set at 300, 400, 550, and 650 mV, respectively. Calibration curves show good linearity, and the recovery of salbutamol proves to be constant and unaffected by the concentration of the drug. This HPLC method with electrochemical detection is reproducible and sensitive enough for the determination of salbutamol in human plasma and urine. PMID:15189600
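
    The quantitation behind "calibration curves show good linearity" is a least-squares fit of detector response against standard concentration, inverted for unknowns. The sketch below uses invented numbers purely for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration standards: concentration (ng/mL) vs. peak area.
conc = np.array([5, 10, 25, 50, 100, 200], dtype=float)
area = np.array([1.1e4, 2.2e4, 5.4e4, 1.08e5, 2.17e5, 4.31e5])

slope, intercept = np.polyfit(conc, area, 1)      # linear calibration fit
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.1f} * conc + {intercept:.1f}, r^2 = {r**2:.4f}")

# Back-calculate the concentration of an unknown from its measured area.
unknown_area = 7.9e4
print(f"estimated conc: {(unknown_area - intercept) / slope:.1f} ng/mL")
```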

  11. Synthesis and Characterization of High Performance Polyimides Containing the Bicyclo(2.2.2)oct-7-ene Ring System

    NASA Technical Reports Server (NTRS)

    Alvarado, M.; Harruna, I. I.; Bota, K. B.

    1997-01-01

    Due to the difficulty of processing polyimides with high temperature stability and good solvent resistance, we have synthesized high performance polyimides containing the bicyclo(2.2.2)oct-7-ene ring system, which can easily be fabricated into films and fibers and subsequently converted to the more stable aromatic polyimides. In order to improve processability, we prepared two polyimides by reacting 1,4-phenylenediamine and 1,3-phenylenediamine with bicyclo(2.2.2)-7-octene-2,3,5,6-tetracarboxylic dianhydride. The polyimides were characterized by FTIR, FTNMR, solubility, and thermal analysis. Thermogravimetric analysis (TGA) showed that the 1,4-phenylenediamine and 1,3-phenylenediamine containing polyimides were stable up to 460 and 379 C, respectively, under a nitrogen atmosphere. No melting transitions were observed for either polyimide. The 1,4-phenylenediamine containing polyimide is partially soluble in dimethyl sulfoxide and methane sulfonic acid, and soluble in sulfuric acid at room temperature. The 1,3-phenylenediamine containing polyimide is partially soluble in dimethyl sulfoxide, tetramethyl urea, and N,N-dimethyl acetamide, and soluble in methane sulfonic acid and sulfuric acid.

  12. Determination of Oxyclozanide in Beef and Milk using High-Performance Liquid Chromatography System with UV Detector

    PubMed Central

    Jo, Kyul; Cho, Hee-Jung; Yi, Hee; Cho, Soo-Min; Park, Jin-A; Kwon, Chan-Hyeok; Park, Hee-Ra; Kwon, Ki-Sung

    2011-01-01

    A method was developed and validated for the determination of oxyclozanide residue concentrations in beef and commercial milk using a high-performance liquid chromatography system. Oxyclozanide was successfully separated on a reversed-phase column (Xbridge-C18, 4.6×250 mm, 5 µm) with a mobile phase composed of acetonitrile and 0.1% phosphoric acid (60:40, v/v%). The analytical procedure involved deproteinization using acetonitrile for beef and 2% formic acid in acetonitrile for commercial milk, dehydration by adding sodium sulfate to the liquid analytical sample, and defatting with n-hexane; after these steps, the extract was evaporated to dryness under a stream of nitrogen. The final extracted sample was dissolved in the mobile phase and filtered through a 0.45 µm syringe filter. The method had good selectivity and recovery (70.70±7.90-110.79±14.95%) from the matrices. The LOQs ranged from 9.7 to 9.8 µg/kg for beef and commercial milk. The recoveries met the standards set by the CODEX guideline. PMID:21826158

  13. Easy to use uncooled ¼ VGA 17 µm FPA development for high performance compact and low-power systems

    NASA Astrophysics Data System (ADS)

    Robert, P.; Tissot, JL.; Pochic, D.; Gravot, V.; Bonnaire, F.; Clerambault, H.; Durand, A.; Tinnes, S.

    2012-06-01

    The high level of expertise accumulated by ULIS and CEA/LETI on uncooled microbolometers made from amorphous silicon has enabled ULIS to develop a ¼ VGA IRFPA format with 17 µm pixel pitch, supporting the development of low size, weight, and power (SWaP), high-performance IR systems. The ROIC architecture is described, in which innovations are widely implemented on-chip to make operation easier for the user. The detector configuration (integration time, windowing, gain, scanning direction...) is driven by a standard I²C link. Like most visible arrays, the detector adopts the HSYNC/VSYNC free-run mode of operation, driven with only one master clock (MC) supplied to the ROIC, which feeds back pixel, line, and frame synchronizations. On-chip PROM memory is available for storing detector characteristics and customer operational conditions. Power consumption has been kept low: less than 60 mW is possible in analog mode at 60 Hz, and less than 175 mW in digital mode (14 bits). A wide electrical dynamic range (2.4 V) is maintained despite the use of an advanced CMOS node. The specific appeal of this unit lies in the high uniformity and easy operation it provides. The reduced pixel pitch turns this TEC-less ¼ VGA array into a product well adapted for high-resolution, compact systems. An NETD of 35 mK and a thermal time constant of 10 ms have been measured, giving a 350 mK·ms figure of merit. We emphasize the NETD trade-off against wide thermal dynamic range, as well as the high uniformity of characteristics and pixel operability achieved thanks to the mastering of the amorphous silicon technology coupled with the ROIC design. This technology node, associated with advanced packaging techniques, paves the way to compact, low-power systems.
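
    The detector configuration path described above (integration time, windowing, gain, and scanning direction set over a standard I²C link) can be illustrated with a short sketch. The snippet below is illustrative only: the device address and register map are hypothetical placeholders, since the record does not give the actual ROIC register layout; only the generic Linux smbus2 calls are real.

      # Illustrative sketch of configuring an imaging detector over I2C.
      # DETECTOR_ADDR and all register addresses/values are hypothetical
      # placeholders, not the real ROIC register map.
      from smbus2 import SMBus

      DETECTOR_ADDR = 0x2A   # hypothetical I2C device address
      REG_INT_TIME = 0x01    # hypothetical register: integration time
      REG_GAIN = 0x02        # hypothetical register: gain select
      REG_WINDOW = 0x03      # hypothetical register: windowing mode

      with SMBus(1) as bus:  # I2C bus 1 on a typical embedded host
          bus.write_byte_data(DETECTOR_ADDR, REG_INT_TIME, 0x40)
          bus.write_byte_data(DETECTOR_ADDR, REG_GAIN, 0x01)
          bus.write_byte_data(DETECTOR_ADDR, REG_WINDOW, 0x00)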

  14. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc...), into one place. Our server-side architecture provides a real-time stream processing system that utilizes server-based NVIDIA Graphical Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization; this allows NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space, regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new

  15. High-Performance SiC/SiC Ceramic Composite Systems Developed for 1315 C (2400 F) Engine Components

    NASA Technical Reports Server (NTRS)

    DiCarlo, James A.; Yun, Hee Mann; Morscher, Gregory N.; Bhatt, Ramakrishna T.

    2004-01-01

    As structural materials for hot-section components in advanced aerospace and land-based gas turbine engines, silicon carbide (SiC) ceramic matrix composites reinforced by high performance SiC fibers offer a variety of performance advantages over current bill-of-materials, such as nickel-based superalloys. These advantages are based on the SiC/SiC composites displaying higher temperature capability for a given structural load, lower density (approximately 30 to 50 percent of metal density), and lower thermal expansion. These properties should, in turn, result in many important engine benefits, such as reduced component cooling air requirements, simpler component design, reduced support structure weight, improved fuel efficiency, reduced emissions, higher blade frequencies, reduced blade clearances, and higher thrust. Under the NASA Ultra-Efficient Engine Technology (UEET) Project, much progress has been made at the NASA Glenn Research Center in identifying and optimizing two high-performance SiC/SiC composite systems. The table compares typical properties of oxide/oxide panels and SiC/SiC panels formed by the random stacking of balanced 0 degrees/90 degrees fabric pieces reinforced by the indicated fiber types. The Glenn SiC/SiC systems A and B (shaded area of the table) were reinforced by the Sylramic-iBN SiC fiber, which was produced at Glenn by thermal treatment of the commercial Sylramic SiC fiber (Dow Corning, Midland, MI; ref. 2). The treatment process (1) removes boron from the Sylramic fiber, thereby improving fiber creep, rupture, and oxidation resistance, and (2) allows the boron to react with nitrogen to form a thin in situ grown BN coating on the fiber surface, thereby providing an oxidation-resistant buffer layer between contacting fibers in the fabric and the final composite. The fabric stacks for all SiC/SiC panels were provided to GE Power Systems Composites for chemical vapor infiltration of Glenn-designed BN fiber coatings and conventional SiC matrices

  16. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer size. Numerical investigation of system performance when the optical switch is scaled to 32x32 ports indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  17. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires substantial data and error analysis to overcome issues with clusters. Zenoss and Splunk help monitor system log messages that report issues about the clusters to monitoring services. The InfiniBand infrastructure on a number of clusters was upgraded to ibmon2, which requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1; ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; above certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
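
    Since the record states that the ibmon2 filters are written in Python and trigger on port-counter thresholds, a minimal sketch of that pattern is given below. The counter names, thresholds, and reporting hook are invented placeholders, not the actual ibmon2 code.

      # Minimal sketch of a threshold filter over InfiniBand port
      # counters, in the spirit of the ibmon2 filters described above.
      # All names and thresholds are hypothetical placeholders.
      PORT_ERROR_THRESHOLDS = {
          "symbol_error_counter": 100,
          "link_downed_counter": 5,
          "port_rcv_errors": 50,
      }

      def filter_port_counters(host, counters):
          """Return alert messages for counters exceeding their threshold."""
          alerts = []
          for name, limit in PORT_ERROR_THRESHOLDS.items():
              value = counters.get(name, 0)
              if value > limit:
                  alerts.append(f"{host}: {name}={value} exceeds {limit}")
          return alerts

      def notify_oncall(alerts):
          """Stand-in for the real reporting path (e.g., Zenoss/Splunk)."""
          for message in alerts:
              print("ALERT:", message)

      if __name__ == "__main__":
          sample = {"symbol_error_counter": 250, "link_downed_counter": 1}
          notify_oncall(filter_port_counters("node042", sample))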

  18. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap (HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10(exp 12) charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10(exp 2) MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and 20 kV. To date, a series of normal-matter experiments using positive and negative ions have been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal-matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a non-destructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by using the actual annihilation products as a unique indicator. The immediate yield of the annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence meters, and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic-ray events from the multi-particle annihilation shower. The system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling interval that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods; this provides an independent method of validating particle lifetimes. The second is a high-speed sample rate accumulating information on a microsecond time scale, useful when trapped antiparticles are extracted
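
    The voting logic described above, which requires several scintillator channels to fire within a narrow window before an event is counted, can be sketched compactly. The window width and vote count below are illustrative assumptions, not the HiPAT hardware settings.

      # Sketch of N-fold coincidence voting over scintillator hits:
      # a single cosmic ray fires few channels, while an annihilation
      # shower fires several channels within a narrow time window.
      def count_coincidences(hits, window_ns=50.0, votes_required=3):
          """hits: time-sorted list of (timestamp_ns, channel_id).
          Count events where >= votes_required distinct channels fire
          within window_ns of the first hit in a cluster."""
          events = 0
          i, n = 0, len(hits)
          while i < n:
              t0 = hits[i][0]
              channels = set()
              j = i
              while j < n and hits[j][0] - t0 <= window_ns:
                  channels.add(hits[j][1])
                  j += 1
              if len(channels) >= votes_required:
                  events += 1
              i = j  # move past this cluster
          return events

      # Three channels within 50 ns -> one counted event; the lone
      # later hit is rejected as a cosmic-ray single.
      hits = [(0.0, 1), (12.0, 2), (30.0, 3), (5000.0, 1)]
      print(count_coincidences(hits))  # -> 1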

  19. Inverse opal-inspired, nanoscaffold battery separators: a new membrane opportunity for high-performance energy storage systems.

    PubMed

    Kim, Jung-Hwan; Kim, Jeong-Hoon; Choi, Keun-Ho; Yu, Hyung Kyun; Kim, Jong Hun; Lee, Joo Sung; Lee, Sang-Young

    2014-08-13

    The facilitation of ion/electron transport, along with the ever-increasing demand for high energy density, is key to boosting the development of energy storage systems such as lithium-ion batteries. Among major battery components, separator membranes have not been the center of attention compared to other, electrochemically active materials, despite their important roles in allowing ionic flow and preventing electrical contact between electrodes. Here, we present a new class of battery separator based on an inverse opal-inspired, seamless nanoscaffold structure ("IO separator"), as an unprecedented membrane opportunity to enable remarkable advances in cell performance far beyond those accessible with conventional battery separators. The IO separator is easily fabricated through one-pot, evaporation-induced self-assembly of colloidal silica nanoparticles in the presence of ultraviolet (UV)-curable triacrylate monomer inside a nonwoven substrate, followed by UV-cross-linking and selective removal of the silica nanoparticle superlattices. The precisely ordered/well-reticulated nanoporous structure of the IO separator allows significant improvement in ion transfer toward electrodes. The IO separator-driven facilitation of ion transport is expected to play a critical role in the realization of high-performance batteries (in particular, under harsh conditions such as high-mass-loading electrodes, fast charging/discharging, and highly polar liquid electrolytes). Moreover, the IO separator moves the Ragone plot curves to a more desirable position representing high-energy/high-power density, without tailoring other battery materials and configurations. This study provides a new perspective on battery separators: a paradigm shift from plain porous films to pseudoelectrochemically active nanomembranes that can influence the charge/discharge reaction. PMID:24979037

  20. High-performance two-axis gimbal system for free space laser communications onboard unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    Locke, Michael; Czarnomski, Mariusz; Qadir, Ashraf; Setness, Brock; Baer, Nicolai; Meyer, Jennifer; Semke, William H.

    2011-03-01

    A custom-designed and manufactured gimbal with a wide field-of-view and fast response time is developed. This enhanced custom design is a 24-volt system with integrated motor controllers and drivers that offers a full 180° field-of-view in both azimuth and elevation; this provides a more continuous tracking capability as well as increased velocities of up to 479° per second. The addition of active high-frequency vibration control, to complement the passive vibration isolation system, is also in development. The ultimate goal of this research is to achieve affordable, reliable, and secure air-to-air laser communications between two separate remotely piloted aircraft. As a proof of concept, the practical implementation of an air-to-ground laser-based video communications payload system flown by a small Unmanned Aerial Vehicle (UAV) will be demonstrated. A numerical tracking algorithm has been written, tested, and used to aim the airborne laser transmitter at a stationary ground-based receiver with known GPS coordinates; however, further refinement of the tracking capabilities is dependent on an improved gimbal design for precision pointing of the airborne laser transmitter. The current gimbal pointing system is a two-axis, commercial-off-the-shelf component, which is limited in both range and velocity. The current design is capable of 360° of pan and 78° of tilt at a velocity of 60° per second. The control algorithm used for aiming the gimbal is executed on a PC-104 format embedded computer onboard the payload to accurately track a stationary ground-based receiver. This algorithm autonomously calculates a line-of-sight vector in real time using the UAV autopilot's Differential Global Positioning System (DGPS), which provides latitude, longitude, and altitude, and Inertial Measurement Unit (IMU), which provides roll, pitch, and yaw data, along with the known Global Positioning System (GPS) location of the ground-based photodiode array receiver.
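
    The line-of-sight calculation described above can be sketched with the generic textbook formulation: convert the DGPS geodetic coordinates to ECEF (WGS-84 assumed), form the East-North-Up vector from aircraft to receiver, and derive pan/tilt commands corrected for heading. This is a simplified illustration, not the payload's actual algorithm; full roll/pitch compensation from the IMU is omitted for brevity, and the example coordinates are made up.

      # Sketch: aim a gimbal at a ground station with known GPS location.
      import math

      A = 6378137.0          # WGS-84 semi-major axis (m)
      E2 = 6.69437999014e-3  # WGS-84 first eccentricity squared

      def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
          lat, lon = math.radians(lat_deg), math.radians(lon_deg)
          n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
          return ((n + alt_m) * math.cos(lat) * math.cos(lon),
                  (n + alt_m) * math.cos(lat) * math.sin(lon),
                  (n * (1.0 - E2) + alt_m) * math.sin(lat))

      def enu_vector(ac, gs):
          """East-North-Up vector from aircraft ac to ground station gs,
          both given as (lat_deg, lon_deg, alt_m)."""
          ax, ay, az = geodetic_to_ecef(*ac)
          gx, gy, gz = geodetic_to_ecef(*gs)
          dx, dy, dz = gx - ax, gy - ay, gz - az
          lat, lon = math.radians(ac[0]), math.radians(ac[1])
          e = -math.sin(lon) * dx + math.cos(lon) * dy
          n = (-math.sin(lat) * math.cos(lon) * dx
               - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
          u = (math.cos(lat) * math.cos(lon) * dx
               + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
          return e, n, u

      def gimbal_angles(ac, gs, yaw_deg):
          """Pan/tilt (deg) toward gs, correcting for aircraft heading
          only; roll/pitch compensation is omitted for brevity."""
          e, n, u = enu_vector(ac, gs)
          pan = (math.degrees(math.atan2(e, n)) - yaw_deg) % 360.0
          tilt = math.degrees(math.atan2(u, math.hypot(e, n)))
          return pan, tilt

      # Aircraft 1 km above a receiver slightly to its south
      print(gimbal_angles((47.92, -97.03, 1300.0), (47.90, -97.03, 300.0), 0.0))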

  1. [A systematic screening and identification method for 29 central nervous system drugs in body fluid by high performance capillary electrophoresis].

    PubMed

    Wu, H F; Guan, F Y; Luo, Y

    1997-05-01

    A systematic screening method has been developed for the detection of 29 central nervous system (CNS) drugs in human plasma, urine, and gastric juice by high performance capillary electrophoresis (HPCE). The first step is sample preparation. The patient's or normal human plasma (0.5 ml) spiked with CNS drugs was extracted with 2 x 4 ml dichloromethane, while 2 ml of patient's or spiked urine was extracted with 2 x 6 ml chloroform. The combined extract from plasma or urine was evaporated to dryness in a rotary evaporator at 35 degrees C. The residue was dissolved in 100 microliters methanol, and subsequently 400 microliters of redistilled water was added. The patient gastric juice (3 ml) was centrifuged at 2,000 r.min-1 for 5 min. The supernatant was filtered through a 0.45 micron microporous membrane for injection onto capillary columns. The second step is CZE separation in an acidic buffer composed of 30 mmol.L-1 (NH4)3PO4 (pH 2.50) and 10% acetonitrile (condition A). Most of the benzodiazepines (diazepam, nitrazepam, chlordiazepoxide, flurazepam, estazolam, alprazolam) and methaqualone were baseline separated and detected at 5-13 min, while thiodiphenylamines showed group peaks at 3-5 min and barbiturates co-migrated with the electroosmotic flow (EOF). The third step is separation in a basic buffer constituted of 70 mmol.L-1 Na2HPO4 (pH 8.60) and 30% acetonitrile (condition B). The thiodiphenylamines and some other basic drugs could be well separated, including trihexyphenidyl, imipramine, amitriptyline, diphenhydramine, chlorpromazine, doxepin, chlorprothixene, promethazine, and flurazepam, while the rest of the CNS drugs did not interfere with the separation. The last step is separation by micellar electrokinetic chromatography (MEKC) in a buffer of 70 mmol.L-1 SDS plus 15 mmol.L-1 Na2HPO4 (pH 7.55) and 5% methanol (condition C). Barbiturates (barbital, phenobarbital, methylphenobarbital, amobarbital, thiopental, pentobarbital

  2. Engineering development of coal-fired high performance power systems, Phase II and Phase III. Quarterly progress report, April 1, 1996--June 30, 1996

    SciTech Connect

    1996-11-01

    Work is presented on the development of a coal-fired high performance power generation system by the year 2000. This report describes the design of the air heater, duct heater, system controls, and quench zone, as well as work on slag viscosity.

  3. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers, such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylene-terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials, as well as the development of new polymers, will provide technology to help meet NASA's future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  4. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: (1) efforts to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; (2) continued preparation and evaluation of flexible, low-dielectric silicon- and fluorine-containing polymers with improved toughness; and (3) synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  5. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system were developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  6. CLUPI, a high-performance imaging system on the rover of the 2018 mission to discover biofabrics on Mars

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; Coradini, A.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.

    2011-10-01

    The scientific objectives of the 2018 ExoMars rover mission are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, textures, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ExoMars rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized (<700 g), low-power, robust imaging system able to operate at very low temperatures (-120°C). CLUPI has a working distance from 10 cm to infinity, providing outstanding pictures with a 2652x1768 colour detector. At 10 cm, the resolution is 7 micrometers/pixel in colour. The optical-mechanical interface is a smart assembly in titanium that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, Mars Express and SMART-1 missions… In a typical field scenario, the geologist will use his/her eyes to make an overview of an area and the outcrops within it to determine sites of particular interest for more detailed study. In the ExoMars scenario, the PanCam wide angle cameras (WACs) will be used for this task. After having made a preliminary general evaluation, the geologist will approach a particular outcrop for closer observation of structures at the decimetre to subdecimetre scale (ExoMars' High Resolution Camera) before finally getting very close up to the surface with a hand lens (ExoMars' CLUPI), and/or taking a hand specimen, for detailed observation of textures and minerals. Using structural, textural and preliminary compositional analysis, the geologist identifies the materials and makes a decision as to whether they are of

  7. Use of high-performance computers, FEA and the CAVE automatic virtual environment for collaborative design of complex systems

    SciTech Connect

    Plaskacz, E.J.; Kulak, R.F.

    1996-03-01

    Concurrent, interactive engineering design and analysis has the potential for substantially reducing product development time and enhancing US competitiveness. Traditionally, engineering design has involved running engineering analysis codes to simulate and evaluate the response of a product or process, writing the output data to file, and viewing or "post-processing" the results at a later time. The emergence of high-performance computer architectures, virtual reality, and advanced telecommunications in the mid-1990s promises to dramatically alter the way designers, manufacturers, engineers and scientists will do their work.

  8. High Performance Work and Learning Systems: Crafting a Worker-Centered Approach. Proceedings of a Conference (Washington, D.C., September 1991).

    ERIC Educational Resources Information Center

    Marschall, Daniel, Ed.

    A consensus that unions must develop coherent and comprehensive policies on new work systems and continuous learning in order to guide local activities was the central theme of this conference on the interrelated issues of the high performance work organization. These proceedings include the following presentations: "Labor's Stake in High…

  9. Validation of a high-performance liquid chromatography method for the determination of (-)-alpha-bisabolol from particulate systems.

    PubMed

    São Pedro, André; Detoni, Cássia; Ferreira, Domingos; Cabral-Albuquerque, Elaine; Sarmento, Bruno

    2009-09-01

    A reversed-phase high performance liquid chromatography method has been developed and validated for the determination and quantitation of the natural sesquiterpene (-)-alpha-bisabolol. Furthermore, the method was applied to the characterization of chitosan millispheres and liposomes entrapping Zanthoxylum tingoassuiba essential oil, which contains an appreciable amount of (-)-alpha-bisabolol. A reversed-phase C(18) column and gradient elution were used, with the mobile phase composed of (A) acetonitrile-water-phosphoric acid (19:80:1) and (B) acetonitrile. The eluent was pumped at a flow rate of 0.8 mL/min with UV detection at 200 nm. In the range 0.02-0.64 mg/mL the assay showed good linearity (R(2) = 0.9999) and specificity for successful identification and quantitation of (-)-alpha-bisabolol in the essential oil without interfering peaks. The method also showed good reproducibility, demonstrating inter-day and intra-day precision based on relative standard deviation values (up to 3.03%), accuracy (mean recovery of 100.69% +/- 1.05%), and low detection and quantitation limits (0.0005 and 0.0016 mg/mL, respectively). The method was also robust, showing a recovery of 98.81% under a change of solvent in standard solutions. The suitability of the method was demonstrated by the successful determination of the association efficiency of (-)-alpha-bisabolol in chitosan millispheres and liposomes. PMID:19353738
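
    Figures of merit like those reported above (linearity, LOD, LOQ) follow a standard ICH-style computation; a sketch of the arithmetic on illustrative calibration data (not the paper's measurements) is shown below: fit the calibration line, take the residual standard deviation, and estimate LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope.

      # Sketch of calibration linearity and LOD/LOQ estimation.
      # Data points are illustrative, not from the paper.
      import numpy as np

      conc = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])    # mg/mL
      area = np.array([10.1, 20.3, 40.2, 80.9, 161.5, 322.8])  # peak areas

      slope, intercept = np.polyfit(conc, area, 1)
      residuals = area - (slope * conc + intercept)
      sigma = residuals.std(ddof=2)  # residual SD of the regression

      r2 = 1 - (residuals ** 2).sum() / ((area - area.mean()) ** 2).sum()
      lod = 3.3 * sigma / slope
      loq = 10.0 * sigma / slope
      print(f"R^2={r2:.5f}  LOD={lod:.4f} mg/mL  LOQ={loq:.4f} mg/mL")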

  10. Speciation of chromium in environmental samples by dual electromembrane extraction system followed by high performance liquid chromatography.

    PubMed

    Safari, Meysam; Nojavan, Saeed; Davarani, Saied Saeed Hosseiny; Morteza-Najarian, Amin

    2013-07-30

    This study proposes dual electromembrane extraction followed by high performance liquid chromatography for the selective separation and preconcentration of Cr(VI) and Cr(III) in different environmental samples. The method is based on the electrokinetic migration of chromium species toward the electrodes of opposite charge in two different hollow fibers. The extracted species were then complexed with ammonium pyrrolidinedithiocarbamate for HPLC analysis. The effects of analytical parameters including pH, type of organic solvent, sample volume, stirring rate, extraction time, and applied voltage were investigated. The results showed that Cr(III) and Cr(VI) could be simultaneously extracted into the two different hollow fibers. Under optimized conditions, the analytes were quantified by HPLC, with acceptable linearity ranging from 20 to 500 μg L(-1) (R(2) values ≥ 0.9979) and repeatability (RSD) ranging between 9.8% and 13.7% (n=5). Also, preconcentration factors of 21.8 and 33, corresponding to recoveries of 31.1% and 47.2%, were achieved for Cr(III) and Cr(VI), respectively. The estimated detection limits (S/N ratio of 3:1) were less than 5.4 μg L(-1). Finally, the proposed method was successfully applied to determine Cr(III) and Cr(VI) species in real water samples. PMID:23856230

  11. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    (LEED®) Green Building Rating System (LEED 2009). The document employs a two-level approach for high performance building at INL. The first level identifies the requirements of the Guiding Principles for Sustainable New Construction and Major Renovations, and the second level recommends which credits should be met when LEED Gold certification is required.

  12. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide, and polyethylene naphthalate). Two different methods were used to prepare the foam samples: high-temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed using scanning electron microscopy.

  13. High Performance Processors for Space Environments: A Subproject of the NASA Exploration Missions Systems Directorate "Radiation Hardened Electronics for Space Environments" Technology Development Program

    NASA Technical Reports Server (NTRS)

    Johnson, M.; Label, K.; McCabe, J.; Powell, W.; Bolotin, G.; Kolawa, E.; Ng, T.; Hyde, D.

    2007-01-01

    Implementation of challenging Exploration Systems Missions Directorate objectives and strategies can be constrained by onboard computing capabilities and power efficiencies. The Radiation Hardened Electronics for Space Environments (RHESE) High Performance Processors for Space Environments project will address this challenge by significantly advancing the sustained throughput and processing efficiency of high-performance radiation-hardened processors, targeting delivery of products by the end of FY12.

  14. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications, or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both benchtop hardware and a commercial microsurgery system. PMID:15458092

  15. Identification of high performance and component technology for space electrical power systems for use beyond the year 2000

    NASA Technical Reports Server (NTRS)

    Maisel, James E.

    1988-01-01

    Addressed are some of the space electrical power system technologies that should be developed for the U.S. space program to remain competitive in the 21st century. A brief historical overview of some U.S. manned/unmanned spacecraft power systems is given to establish the fact that electrical systems are, and will continue to become, more sophisticated as power levels approach those on the ground. Adaptive/expert power systems that can function in an extraterrestrial environment will be required to take appropriate action during electrical faults so that the impact is minimal. Man-hours can be reduced significantly by relinquishing tedious routine system component maintenance to the adaptive/expert system. By cataloging component signatures over time, this system can flag a premature component failure and thus possibly avoid a major fault. High frequency operation is important if the electrical power system mass is to be cut significantly. High power semiconductor or vacuum switching components will be required to meet future power demands. System mass tradeoffs have been investigated in terms of high temperature operation, efficiency, voltage regulation, and system reliability. High temperature semiconductors will be required: silicon carbide materials will operate at temperatures around 1000 K, and diamond materials up to 1300 K. The driver for elevated temperature operation is that radiator mass falls sharply as temperature rises, because radiated power scales with the fourth power of temperature.
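
    The fourth-power dependence invoked in the last sentence is the Stefan-Boltzmann law; a minimal statement of the scaling, assuming an ideal radiator of emissivity \epsilon rejecting power P from area A:

      P = \epsilon \sigma A T^{4}
      \quad\Longrightarrow\quad
      A = \frac{P}{\epsilon \sigma T^{4}}

    so, for a fixed power to be rejected, doubling the radiator temperature cuts the required area, and roughly the mass, by a factor of 2^4 = 16.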

  16. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High Performance FORTRAN is a set of extensions to FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism, in particular data parallelism. Thus the code is specified in a high-level, portable manner with no explicit tasking or communication statements. The goal is to allow architecture-specific compilers to generate efficient code for a wide variety of architectures, including SIMD and MIMD shared and distributed memory machines.

  17. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  18. An open, parallel I/O computer as the platform for high-performance, high-capacity mass storage systems

    NASA Technical Reports Server (NTRS)

    Abineri, Adrian; Chen, Y. P.

    1992-01-01

    APTEC Computer Systems is a Portland, Oregon-based manufacturer of I/O computers. APTEC's work in the context of high-density storage media is on programs requiring real-time data capture with low-latency processing and storage requirements. An example of APTEC's work in this area is the Loral/Space Telescope-Data Archival and Distribution System. This is an existing Loral AeroSys-designed system which utilizes an APTEC I/O computer. The key attributes of a system architecture suitable for this environment are as follows: (1) data acquisition alternatives; (2) a wide range of supported mass storage devices; (3) data processing options; (4) data availability through standard network connections; and (5) an overall system architecture (hardware and software) designed for high bandwidth and low latency. APTEC's approach is outlined in this document.

  19. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, the mini H2 generator keeps the system far from explosion hazards, since the use of a high-pressure H2 source is abandoned. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and very good linear response, and was able to rapidly monitor complicated environmental VOC samples.

  20. High performance mini-gas chromatography-flame ionization detector system based on micro gas chromatography column.

    PubMed

    Zhu, Xiaofeng; Sun, Jianhai; Ning, Zhanwu; Zhang, Yanni; Liu, Jinhua

    2016-04-01

    Monitoring volatile organic compounds (VOCs) is an important measure for preventing environmental pollution; therefore, a mini gas chromatography (GC) flame ionization detector (FID) system integrated with a mini H2 generator and a micro GC column was developed for environmental VOC monitoring. In addition, the mini H2 generator keeps the system far from explosion hazards, since the use of a high-pressure H2 source is abandoned. The experimental results indicate that the fabricated mini GC-FID system demonstrated high repeatability and very good linear response, and was able to rapidly monitor complicated environmental VOC samples. PMID:27131686

  1. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  2. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop cost-effective, high-performance windows for commercial buildings. The main performance requirement for these windows was an R-value of at least 5 ft2·°F·h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup and includes some of the field and simulation results.

  3. A high-performance ultrasonic system for the simultaneous transmission of data and power through solid metal barriers.

    PubMed

    Lawry, Tristan J; Wilt, Kyle R; Ashdown, Jon D; Scarton, Henry A; Saulnier, Gary J

    2013-01-01

    This paper presents a system capable of simultaneous high-power and high-data-rate transmission through solid metal barriers using ultrasound. By coaxially aligning a pair of piezoelectric transducers on opposite sides of a metal wall and acoustically coupling them to the barrier, an acoustic-electric transmission channel is formed that obviates the need for physical penetration. Independent data and power channels are utilized, but they are separated by only 25.4 mm to reduce the system's form factor. Commercial off-the-shelf components and evaluation boards are used to create real-time prototype hardware, and the full system is capable of transmitting data at 17.37 Mbps and delivering 50 W of power through a 63.5-mm thick steel wall. A synchronous multi-carrier communication scheme (OFDM) is used to achieve a very high spectral efficiency and to ensure that there is only minor interference between the power and data channels. Also presented is a discussion of potential enhancements that could greatly improve the power and data-rate capabilities of the system. This system could have a tremendous impact on improving safety and preserving structural integrity in many military applications (submarines, surface ships, unmanned undersea vehicles, armored vehicles, planes, etc.) as well as in a wide range of commercial, industrial, and nuclear systems. PMID:23287924
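
    A minimal round-trip sketch of the multi-carrier (OFDM) scheme mentioned above is shown below; the subcarrier count, cyclic-prefix length, and QPSK mapping are illustrative choices, not the parameters of the actual through-metal system.

      # OFDM modulate/demodulate round trip over an ideal channel.
      import numpy as np

      N_SC, CP = 64, 16  # subcarriers, cyclic-prefix length (illustrative)

      def qpsk_map(bits):
          b = bits.reshape(-1, 2)
          return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

      def ofdm_modulate(symbols):
          x = np.fft.ifft(symbols, N_SC)       # one OFDM symbol
          return np.concatenate([x[-CP:], x])  # prepend cyclic prefix

      def ofdm_demodulate(samples):
          return np.fft.fft(samples[CP:], N_SC)

      rng = np.random.default_rng(0)
      bits = rng.integers(0, 2, 2 * N_SC)
      rx = ofdm_demodulate(ofdm_modulate(qpsk_map(bits)))
      assert np.allclose(rx, qpsk_map(bits))  # perfect recovery, no noise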

  4. High Performance Measurement System of Large Area Solid-State Track Detector Array for Ultra Heavy Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Kodaira, S.; Doke, T.; Hareyama, M.; Hasebe, N.; Sakurai, K.; Ota, S.; Sato, M.; Yasuda, N.; Nakamura, S.; Kamei, T.; Tawara, H.; Ogura, K.

    The handling of solid-state track detectors (SSTDs) has historically required long periods of time and much manpower to scan and analyze the etch-pits produced on the detector. Because a detector area greater than a few m2 is required to observe ultraheavy nuclei in galactic cosmic rays, a high-speed scanning system is practically essential for our observations. We have developed a fast automated digital imaging optical microscope (HSP-1000) to scan and analyze the etch-pits produced on the detector, whose image acquisition speed is 50-100 times faster than that of conventional microscope systems. Furthermore, analyzing the massive cosmic-ray track data produced over an extremely large exposed area requires a completely automated multi-sample scanning system. The developed automated system consists of a modified HSP-1000 microscope for image acquisition, a robot arm to replace the sample trays, a magazine station for storing sample trays, and a scanning and analyzing computer to control the whole system. Moreover, since improved thickness measurement accuracy in the local area of the SSTD allows higher charge and mass resolutions, a new system has been developed to measure the SSTD thickness adjacent to each etch-pit with an excellent resolution of +/- 0.2 um.

  5. High-performance computer aided detection system for polyp detection in CT colonography with fluid and fecal tagging

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Wang, Shijun; Kabadi, Suraj; Summers, Ronald M.

    2009-02-01

    CT colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and cancer screening. Computer-aided detection (CAD) of polyps has improved the consistency and sensitivity of virtual colonoscopy interpretation and reduced the interpretation burden. A CAD system typically consists of four stages: (1) image preprocessing including colon segmentation; (2) initial detection generation; (3) feature selection; and (4) detection classification. In our experience, three existing problems limit the performance of our current CAD system. First, high-density orally administered contrast agents in fecal-tagging CTC have scatter effects on neighboring tissues. The scattering manifests itself as an artificial elevation in the observed CT attenuation values of the neighboring tissues. This pseudo-enhancement phenomenon presents a problem for the application of computer-aided polyp detection, especially when polyps are submerged in the contrast agents. Second, the general kernel approach for surface curvature computation in the second stage of our CAD system can yield erroneous results for thin structures such as small (6-9 mm) polyps and for touching structures such as polyps that lie on haustral folds. Those erroneous curvatures reduce the sensitivity of polyp detection. The third problem is that more than 150 features are selected from each polyp candidate in the third stage of our CAD system. These high-dimensional features make it difficult to learn a good decision boundary for detection classification and reduce the accuracy of predictions. Therefore, an improved CAD system for polyp detection in CTC data is proposed by introducing three new techniques. First, a scale-based scatter correction algorithm is applied to reduce pseudo-enhancement effects in the image pre-processing stage. Second, a cubic spline interpolation method is utilized to accurately estimate curvatures for initial detection generation. Third, a new dimensionality
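
    The cubic-spline curvature step can be illustrated in isolation. The sketch below uses a synthetic 1-D profile rather than CTC data and shows why spline interpolation helps: it yields smooth analytic first and second derivatives for the curvature formula kappa = y'' / (1 + y'^2)^(3/2), instead of noisy finite differences. It is not the paper's actual 3-D implementation.

      # Curvature along a 1-D profile via cubic-spline interpolation.
      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(0.0, 10.0, 21)
      y = np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)  # synthetic bump

      spline = CubicSpline(x, y)
      xs = np.linspace(0.0, 10.0, 201)
      dy, d2y = spline(xs, 1), spline(xs, 2)  # analytic derivatives

      kappa = d2y / (1.0 + dy ** 2) ** 1.5  # signed curvature of y(x)
      print("max |curvature| near the bump apex:", np.abs(kappa).max())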

  6. Low cost, high performance white-light fiber-optic hydrophone system with a trackable working point.

    PubMed

    Ma, Jinyu; Zhao, Meirong; Huang, Xinjing; Bae, Hyungdae; Chen, Yongyao; Yu, Miao

    2016-08-22

    A working-point-trackable fiber-optic hydrophone with high acoustic resolution is proposed and experimentally demonstrated. The sensor is based on a polydimethylsiloxane (PDMS) cavity molded at the end of a single-mode fiber, acting as a low-finesse Fabry-Perot (FP) interferometer. Working-point tracking is achieved by using a low-cost white-light interferometric system with a simple tunable FP filter. By adjusting the optical path difference of the FP filter in real time, the sensor working point can be kept at its highest-sensitivity point. This helps address working-point drift due to hydrostatic pressure, water absorption, and/or temperature changes. It is demonstrated that the sensor system has a high resolution, with a minimum detectable acoustic pressure of 148 Pa, and superior stability compared to a system using a tunable laser. PMID:27557180

  7. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster.

    PubMed

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR that is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts. PMID:25852654

  8. A new high-performance heterologous fungal expression system based on regulatory elements from the Aspergillus terreus terrein gene cluster

    PubMed Central

    Gressler, Markus; Hortschansky, Peter; Geib, Elena; Brock, Matthias

    2015-01-01

    Recently, the Aspergillus terreus terrein gene cluster was identified and selected for development of a new heterologous expression system. The cluster encodes the specific transcription factor TerR that is indispensable for terrein cluster induction. To identify TerR binding sites, different recombinant versions of the TerR DNA-binding domain were analyzed for specific motif recognition. The high affinity consensus motif TCGGHHWYHCGGH was identified from genes required for terrein production, and binding site mutations confirmed their essential contribution to gene expression in A. terreus. A combination of TerR with its terA target promoter was tested as a recombinant expression system in the heterologous host Aspergillus niger. TerR-mediated target promoter activation was directly dependent on its transcription level. Therefore, terR was expressed under control of the regulatable amylase promoter PamyB, and the resulting activation of the terA target promoter was compared with activation levels obtained from direct expression of reporters from the strong gpdA control promoter. Here, the coupled system outcompeted the direct expression system. When the coupled system was used for heterologous polyketide synthase expression, high metabolite levels were produced. Additionally, expression of the Aspergillus nidulans polyketide synthase gene orsA revealed lecanoric acid rather than orsellinic acid as the major polyketide synthase product. Domain swapping experiments assigned this depside formation from orsellinic acid to the OrsA thioesterase domain. These experiments confirm the suitability of the expression system, especially for high-level metabolite production in heterologous hosts. PMID:25852654

  9. Design of a high-performance slide and drive system for a small precision machining research lathe

    SciTech Connect

    Donaldson, R.R.; Maddux, A.S.

    1984-03-01

    The development of high-accuracy machine tools, principally through interest in diamond turning, plus the availability of new cutting tool materials, offers the possibility of improving workpiece accuracy for a much larger variety of materials than that addressed by diamond tools. This paper describes the design and measured performance of a slideway and servo-drive system for a small lathe intended as a tool for research on the above subject, with emphasis on the servo-control design. The slide system provides high accuracy and stiffness over a travel of 100 mm, utilizing oil hydrostatic bearings and a capstan roller drive with integral dc motor and tachometer.

  10. Process innovation in high-performance systems: From polymeric composites R&D to design and build of airplane showers

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Jui

    In the aerospace industry, reducing aircraft weight is key because it increases flight performance and drives down operating costs. With fierce competition in the commercial aircraft industry, companies that focused primarily on exterior aircraft performance design issues are turning more attention to the design of aircraft interiors. Simultaneously, there has been an increase in the number of new amenities offered to passengers, especially in first class travel and executive jets. These new amenities present novel and challenging design parameters that include integration into existing aircraft systems without sacrificing flight performance. The objective of this study was to design a re-circulating shower system for an aircraft that weighs significantly less than pre-existing shower designs. This was accomplished by integrating processes from polymeric composite materials, water filtration, and project management. Carbon/epoxy laminates exposed to hygrothermal cycling conditions were evaluated and compared to model calculations. Novel materials and a variety of fabrication processes were developed to create new types of paper for honeycomb applications. Experiments were then performed on the properties and honeycomb processability of these new papers. Standard water quality tests were performed on samples taken from the re-circulating system to see if current regulatory standards were being met. These studies were executed and integrated with tools from project management to design a better shower system for commercial aircraft applications.

  11. Development of an ultra-high performance multi-turn TOF-SIMS/SNMS system "MULTUM-SIMS/SNMS".

    PubMed

    Ebata, Shingo; Ishihara, Morio; Kumondai, Kousuke; Mibuka, Ryo; Uchino, Kiichiro; Yurimoto, Hisayoshi

    2013-02-01

    A new system incorporating a multi-turn time-of-flight secondary ion/sputtered neutral mass spectrometer (TOF-SIMS/SNMS) with laser post-ionization was designed and constructed. The system consists of a gallium focused ion beam, a femtosecond (fs) laser for post-ionization, and a multi-turn TOF mass spectrometer. When laser post-ionization was used, the secondary ion signal strengths for several metals increased by up to 650 times over the values obtained in conventional TOF-SIMS experiments. Use of the multi-turn mass spectrometer resulted in an increase in mass resolving power with increasing total TOF. The mass resolving power reached 23,000 after 800 multi-turn cycles, corresponding to a flight path length of 1040 m. These results indicate that the system is very effective for the analysis of valuable materials such as space samples, with high sensitivity, high mass resolving power, and high lateral resolution. PMID:23292978

  12. Comparison of ultrasonic and thermospray systems for high performance sample introduction to inductively coupled plasma atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Conver, Timothy S.; Koropchak, John A.

    1995-06-01

    This paper describes detailed work done in our lab to compare analytical figures of merit for pneumatic, ultrasonic and thermospray sample introduction (SI) systems with three different inductively coupled plasma-atomic emission spectrometry (ICP-AES) instruments. One instrument from Leeman Labs, Inc. has an air path echelle spectrometer and a 27 MHz ICP. For low dissolved solid samples with this instrument, we observed that the ultrasonic nebulizer (USN) and fused silica aperture thermospray (FSApT) both offered similar LOD improvements as compared to pneumatic nebulization (PN), 14 and 16 times, respectively. Average sensitivities compared to PN were better for the USN, by 58 times, compared to 39 times for the FSApT. For solutions containing high dissolved solids we observed that FSApT optimized at the same conditions as for low dissolved solids, whereas USN required changes in power and gas flows to maintain a stable discharge. These changes degraded the LODs for USN substantially as compared to those utilized for low dissolved solid solutions, limiting improvement compared to PN to an average factor of 4. In general, sensitivities for USN were degraded at these new conditions. When solutions with 3000 μg/g Ca were analyzed, LOD improvements were smaller for FSApT and USN, but FSApT showed an improvement over USN of 6.5 times. Sensitivities compared to solutions without high dissolved solids were degraded by 19% on average for FSApT, while those for USN were degraded by 26%. The SI systems were also tested with a Varian Instruments Liberty 220 having a vacuum path Czerny-Turner monochromator and a 40 MHz generator. The sensitivities with low dissolved solids solutions compared to PN were 20 times better for the USN and 39 times better for FSApT, and LODs for every element were better for FSApT. Better correlation between relative sensitivities and anticipated relative analyte mass fluxes for FSApT and USN was observed with the Varian instrument. LOD

  13. Making resonance a common case: a high-performance implementation of collective I/O on parallel file systems

    SciTech Connect

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2009-01-01

    Collective I/O is a widely used technique to improve I/O performance in parallel computing. It can be implemented as a client-based or server-based scheme. The client-based implementation is more widely adopted in MPI-IO software such as ROMIO because of its independence from the storage system configuration and its greater portability. However, existing implementations of client-side collective I/O do not take into account the actual pattern of file striping over multiple I/O nodes in the storage system. This can cause a significant number of requests for non-sequential data at I/O nodes, substantially degrading I/O performance. Investigating the surprisingly high I/O throughput achieved when there is an accidental match between a particular request pattern and the data striping pattern on the I/O nodes, we reveal the resonance phenomenon as the cause. Exploiting readily available information on data striping from the metadata server in popular file systems such as PVFS2 and Lustre, we design a new collective I/O implementation technique, resonant I/O, that makes resonance a common case. Resonant I/O rearranges requests from multiple MPI processes to transform non-sequential data accesses on I/O nodes into sequential accesses, significantly improving I/O performance without compromising the independence of a client-based implementation. We have implemented our design in ROMIO. Our experimental results show that the scheme can increase I/O throughput for some commonly used parallel I/O benchmarks, such as mpi-io-test and ior-mpi-io, by up to 157% over the existing implementation of ROMIO, with no scenario demonstrating significantly decreased performance.
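
    The core idea of resonant I/O, grouping and ordering requests so that each I/O node sees an ascending offset stream aligned with its stripes, can be sketched in a few lines. This is a minimal illustration assuming simple round-robin striping; the function name and parameters are invented for the example and are not ROMIO's actual API.

```python
from collections import defaultdict

def reorder_requests(requests, stripe_size, n_io_nodes):
    """Group (offset, length) requests by the I/O node that stores each
    stripe (round-robin striping assumed), then sort each node's queue by
    offset so the node services a sequential access stream."""
    per_node = defaultdict(list)
    for offset, length in requests:
        node = (offset // stripe_size) % n_io_nodes
        per_node[node].append((offset, length))
    return {node: sorted(reqs) for node, reqs in per_node.items()}

# Requests gathered from several MPI processes, scheduled per node:
schedule = reorder_requests([(0, 4096), (131072, 4096), (65536, 4096)],
                            stripe_size=65536, n_io_nodes=2)
print(schedule)  # node 0: offsets 0, 131072 in order; node 1: 65536
```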

  14. MO-G-17A-01: Innovative High-Performance PET Imaging System for Preclinical Imaging and Translational Researches

    SciTech Connect

    Sun, X; Lou, K; Deng, Z; Shao, Y

    2014-06-15

    Purpose: To develop a practical and compact preclinical PET with innovative technologies for the substantially improved imaging performance required for advanced imaging applications. Methods: Several key components of the detector, readout electronics, and data acquisition have been developed and evaluated to achieve leapfrogged imaging performance over a prototype animal PET we had developed. The new detector module consists of an 8×8 array of 1.5×1.5×30 mm³ LYSO scintillators with each end coupled to a latest-generation 4×4 array of 3×3 mm² Silicon Photomultipliers (with ∼0.2 mm insensitive gap between pixels) through a 2.0 mm thick transparent light spreader. The scintillator surface and reflector/coupling were designed and fabricated to preserve an air gap, achieving higher depth-of-interaction (DOI) resolution and improving other detector performance. Front-end readout electronics with an upgraded 16-channel ASIC were newly developed and tested, as was the compact, high-density FPGA-based data acquisition and transfer system targeting a 10M/s coincidence counting rate with low power consumption. The energy, timing, and DOI resolutions of the new detector module with the data acquisition system were evaluated. An initial Na-22 point source image was acquired with 2 rotating detectors to assess the system imaging capability. Results: The detector has no insensitive gaps at its edges and thus can be tiled into a large-scale detector panel. All 64 crystals inside the detector were clearly separated in a flood-source image. The measured energy, timing, and DOI resolutions are around 17%, 2.7 ns, and 1.96 mm (mean value). A point source image was acquired successfully without detector/electronics calibration and data correction. Conclusion: The newly developed detector and readout electronics will enable the targeted scalable and compact PET system in a stationary configuration with >15% sensitivity, ∼1.3 mm uniform imaging resolution, and fast acquisition counting rate
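
    The abstract does not spell out how DOI is estimated, but for a dual-ended-readout crystal like this one a common approach is to infer depth from the ratio of the signals at the two ends. The sketch below assumes exponential light attenuation along the crystal with an invented attenuation length; it illustrates the general technique, not this detector's actual calibration.

```python
import math

CRYSTAL_LEN_MM = 30.0   # crystal length from the abstract
ATTEN_LEN_MM = 40.0     # assumed effective light-attenuation length

def doi_from_signals(s_top, s_bottom):
    """Interaction depth (mm from crystal center, positive toward the top
    end) from the two end signals. With light decaying as exp(-d/L), the
    ratio of the end signals is exp(2*z/L), so z = (L/2) * ln(ratio)."""
    return 0.5 * ATTEN_LEN_MM * math.log(s_top / s_bottom)

# Equal signals -> interaction at the crystal center:
print(doi_from_signals(100.0, 100.0))  # 0.0 mm
print(doi_from_signals(120.0, 80.0))   # ~8.1 mm toward the top end
```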

  15. Compensation of Wave-Induced Motion and Force Phenomena for Ship-Based High Performance Robotic and Human Amplifying Systems

    SciTech Connect

    Love, LJL

    2003-09-24

    The decrease in manpower and increase in material handling needs on many Naval vessels motivate the modeling and control of Naval robotic and robot-assistive devices. This report addresses the design, modeling, control, and analysis of position- and force-controlled robotic systems operating on the deck of a moving ship. First, we provide background information that quantifies the motion of the ship, in terms of both frequency and amplitude. We then formulate the motion of the ship in terms of homogeneous transforms. This transformation provides a link between the motion of the ship and the base of a manipulator. We model the kinematics of a manipulator as a serial extension of the ship motion, and we show how to use these transforms to formulate the kinetic and potential energy of a general, multi-degree-of-freedom manipulator moving on a ship. As a demonstration, we consider two examples: a one-degree-of-freedom system experiencing three sea states and operating in a plane, to verify the methodology, and a three-degree-of-freedom system experiencing all six degrees of ship motion, to illustrate the ease of computation and the complexity of the solution. The first series of simulations explores the impact wave motion has on the tracking performance of a position-controlled robot. We provide a preliminary comparison between conventional linear control and Repetitive Learning Control (RLC) and show how fixed-time-delay RLC breaks down due to the varying nature of the wave disturbance frequency. Next, we explore the impact wave motion disturbances have on Human Amplification Technology (HAT). We begin with a description of the traditional HAT control methodology. Simulations show that the motion of the base of the robot, due to ship motion, generates disturbance forces reflected to the operator that significantly degrade positioning accuracy and resolution at higher sea states. As with position-controlled manipulators, augmenting the control with a Repetitive
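
    A minimal sketch of the transform chaining described in the report: the pose of the manipulator base is the ship's wave-induced pose composed with a fixed mounting transform. The roll/pitch/heave amplitudes and mounting offsets below are illustrative assumptions, not sea-state data.

```python
import numpy as np

def homogeneous(rpy, xyz):
    """4x4 homogeneous transform from roll/pitch/yaw (rad) and translation (m)."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = xyz
    return T

t = 2.0  # s
# Wave-induced ship pose (small roll/pitch plus heave), then the fixed
# transform from the ship frame to the manipulator base on deck:
ship = homogeneous((0.05 * np.sin(0.6 * t), 0.02 * np.sin(0.8 * t), 0.0),
                   (0.0, 0.0, 0.5 * np.sin(0.6 * t)))
mount = homogeneous((0.0, 0.0, 0.0), (10.0, 2.0, 5.0))
base_in_world = ship @ mount  # manipulator base as a serial extension of ship motion
```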

  16. High performance dash-on-warning air mobile missile system. [first strike avoidance for retaliatory aircraft-borne ICBM counterattack

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Levin, A. D.

    1978-01-01

    Because fixed missile bases have become increasingly vulnerable to strategic nuclear attack, an air-mobile missile system is proposed whereby ICBMs can be launched from the hold of a large subsonic aircraft following a missile-assisted supersonic dash to a safe distance from its base (about 50 n mi). Three major categories of vehicle design are presented: staged, which employs vertical take-off and a single solid rocket booster similar to that used on the Space Shuttle; unstaged, which employs vertical take-off and four internally carried reusable liquid rocket engines; and alternative concepts, some using horizontal take-off with duct-burning afterburners. Attention is given to the economics of maintaining 200 ICBMs airborne during an alert (about $600 million for each fleet alert, exclusive of acquisition costs). The chief advantages of the system lie in its reduced vulnerability to surprise attack, because it can be launched on warning, and in the possibility of recalling the aircraft if the warning proves to be a false alarm.

  17. High performance nuclear thermal propulsion system for near term exploration missions to 100 A.U. and beyond

    NASA Astrophysics Data System (ADS)

    Powell, James R.; Paniagua, John; Maise, George; Ludewig, Hans; Todosow, Michael

    1999-05-01

    A new compact ultra light nuclear reactor engine design termed MITEE (MIniature Reac Tor EnginE) is described. MITEE heats hydrogen propellant to 3000 K, achieving a specific impulse of 1000 seconds and a thrust-to-weight of 10. Total engine mass is 200 kg, including reactor, pump, auxiliaries and a 30% contingency. MITEE enables many types of new and unique missions to the outer solar system not possible with chemical engines. Examples include missions to 100 A.U. in less than 10 years, flybys of Pluto in 5 years, sample return from Pluto and the moons of the outer planets, unlimited ramjet flight in planetary atmospheres, etc. Much of the necessary technology for MITEE already exists as a result of previous nuclear rocket development programs. With some additional development, initial MITEE missions could begin in only 6 years.

  18. Sustaining High Performance in Bad Times.

    ERIC Educational Resources Information Center

    Bassi, Laurie J.; Van Buren, Mark A.

    1997-01-01

    Summarizes the results of the American Society for Training and Development Human Resource and Performance Management Survey of 1996 that examined the performance outcomes of downsizing and high performance work systems, explored the relationship between high performance work systems and downsizing, and asked whether some downsizing practices were…

  19. High performance liquid level monitoring system based on polymer fiber Bragg gratings embedded in silicone rubber diaphragms

    NASA Astrophysics Data System (ADS)

    Marques, Carlos A. F.; Peng, Gang-Ding; Webb, David J.

    2015-05-01

    Liquid-level sensing technologies have attracted great prominence because such measurements are essential to industrial applications such as fuel storage, flood warning, and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages, such as high accuracy, cost-effectiveness, compact size, and ease of multiplexing, several optical fiber liquid level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure, while sensors below the surface will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression on the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, the use of linear regression using
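
    The regression step can be illustrated in a few lines: flag the sensors reading above ambient pressure, fit pressure against sensor height, and solve for the height at which the fit returns ambient pressure. The sensor heights and readings below are invented for illustration, not data from the paper.

```python
import numpy as np

heights = np.array([0.0, 0.5, 1.0, 1.5, 2.0])              # m above tank bottom
pressures = np.array([114.1, 109.2, 104.2, 101.3, 101.3])  # kPa readings
ambient = 101.3                                            # kPa

# Sensors reading above ambient (with a small noise tolerance) are submerged.
submerged = pressures > ambient + 0.1
slope, intercept = np.polyfit(heights[submerged], pressures[submerged], 1)

# The surface is where the fitted line crosses ambient pressure.
level = (ambient - intercept) / slope
print(f"liquid surface at ~{level:.2f} m")  # ~1.30 m for these readings
```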

  20. Simultaneous determination of five systemic azoles in plasma by high-performance liquid chromatography with ultraviolet detection.

    PubMed

    Gordien, Jean-Baptiste; Pigneux, Arnaud; Vigouroux, Stephane; Tabrizi, Reza; Accoceberry, Isabelle; Bernadou, Jean-Marc; Rouault, Audrey; Saux, Marie-Claude; Breilh, Dominique

    2009-12-01

    A simple, specific, and automatable HPLC assay was developed for the simultaneous determination of systemic azoles (fluconazole, posaconazole, voriconazole, itraconazole and its metabolite hydroxy-itraconazole, and ketoconazole) in plasma. The major advantage of this assay is sample preparation by fully automatable solid-phase extraction with Varian Plexa cartridges. A C6-phenyl column was used for chromatographic separation, and UV detection was set at a wavelength of 260 nm. Linezolid was used as the internal standard. The assay was specific and linear over the concentration range of 0.05 to 40 microg/mL, except for fluconazole (0.05 to 100 microg/mL) and itraconazole (0.1 to 40 microg/mL). Intra- and inter-day accuracy and precision satisfied FDA guidance: CV between 0.24% and 11.66%, and accuracy between 93.8% and 108.7% for all molecules. The assay was applied to therapeutic drug monitoring in patients hospitalized in intensive care and onco-hematology units. PMID:19608374

  1. Assignment of ozone-sensitive tryptophan residue in tryptophanase by a dual-monitoring high-performance liquid chromatography system

    SciTech Connect

    Ida, N.; Tokushige, M.

    1985-02-01

    Tryptophanase purified from Escherichia coli B/1t7-A is inactivated by mild ozonization following pseudo-first-order kinetics. Previous data from the authors suggest that one of the two tryptophan residues (Trps) in the enzyme subunit is preferentially oxidized concomitant with the ozone inactivation and interacts directly with the coenzyme, pyridoxal phosphate (PLP). To determine which Trp is more susceptible to ozonization and interacts with PLP, the native and ozonized enzyme proteins were cleaved by trypsin, and the two Trp-containing peptides were analyzed by reverse-phase HPLC equipped with a dual-monitoring system consisting of a UV monitor and a fluorescence monitor connected in tandem for selective detection of Trp-containing peptides. This device facilitated rapid detection and quantitation of the Trp-containing peptides, which decreased upon ozonization. The results showed that the Trp preferentially oxidized upon ozonization, and involved in the interaction with PLP, was the one in peptide T-15 rather than that in T-23, which Kagamiyama et al. originally designated.

  2. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF). Volume 1, Final report

    SciTech Connect

    1996-02-01

    A major objective of the coal-fired high performance power systems (HIPPS) program is to achieve significant increases in the thermodynamic efficiency of coal use for electric power generation. Through increased efficiency, all airborne emissions can be decreased, including emissions of carbon dioxide. High performance power systems, as defined for this program, are coal-fired, high-efficiency systems in which the combustion products from coal do not contact the gas turbine. Typically, this type of system involves some indirect heating of the gas turbine inlet air followed by topping combustion with a cleaner fuel. The topping fuel can be natural gas or another relatively clean fuel; fuel gas derived from coal is acceptable. The ultimate goal for HIPPS is to have a system that takes 95 percent of its heat input from coal. Interim systems with at least 65 percent heat input from coal are acceptable, but they are required to have a clear development path to a system that is 95 percent coal-fired. A three-phase program has been planned for the development of HIPPS. Phase 1, reported herein, includes the development of a conceptual design for a commercial plant, for which technical and economic feasibility have been analyzed. Preliminary R&D on some aspects of the system was also done in Phase 1, and a Research, Development and Test plan was developed for Phase 2. Work in Phase 2 includes the testing and analysis required to develop the technology base for a prototype plant, including pilot plant testing at a scale of around 50 MMBtu/hr heat input. The culmination of the Phase 2 effort will be a site-specific design and test plan for a prototype plant. Phase 3 is the construction and testing of this plant.

  3. Materials for tomorrow's infrastructure: A ten-year plan for deploying high-performance construction materials and systems. Technical report

    SciTech Connect

    Belle, R.A.; Almand, K.H.

    1994-12-27

    This report presents a detailed program to transform our nation's infrastructure. The intended audience is the Administration and Congress, other national policy makers, and government and industry leaders. Descriptions of major high-performance research and commercialization projects are provided by working groups representing ten different materials: aluminum, coatings, fiber-reinforced polymer composites, concrete, hot mix asphalt, masonry, roofing materials, smart material devices and monitoring systems, steel, and wood. The report builds on the 1991 National Civil Engineering Research Needs Forum organized by the Civil Engineering Research Foundation (CERF) and the 1993 initial program plan as presented in High-Performance Construction Materials and Systems: An Essential Program for America and Its Infrastructure. The high-performance CONstruction MATerials and systems program (CONMAT) will create significant improvements in the nation's infrastructure and U.S. competitiveness in the construction market. The report concludes by reviewing the strong support that the Administration has shown the CONMAT research effort to date and recommends continued support from government, industry, and academia for this critical initiative.

  4. System Software and Tools for High Performance Computing Environments: A report on the findings of the Pasadena Workshop, April 14--16, 1992

    SciTech Connect

    Sterling, T.; Messina, P.; Chen, M.

    1993-04-01

    The Pasadena Workshop on System Software and Tools for High Performance Computing Environments was held at the Jet Propulsion Laboratory from April 14 through April 16, 1992. The workshop was sponsored by a number of Federal agencies committed to the advancement of high performance computing (HPC), both as a means to advance their respective missions and as a national resource to enhance American productivity and competitiveness. Over a hundred experts in related fields from industry, academia, and government were invited to participate in this effort to assess the current status of software technology in support of HPC systems. The overall objectives of the workshop were to understand the requirements and current limitations of HPC software technology and to contribute to a basis for establishing new directions in research and development for software technology in HPC environments. This report includes reports written by the participants of the workshop's seven working groups. Materials presented at the workshop are reproduced in appendices. Additional chapters summarize the findings and analyze their implications for future directions in HPC software technology development.

  5. Ionic liquid-based aqueous two-phase system, a sample pretreatment procedure prior to high-performance liquid chromatography of opium alkaloids.

    PubMed

    Li, Shehong; He, Chiyang; Liu, Huwei; Li, Kean; Liu, Feng

    2005-11-01

    An ionic liquid/salt aqueous two-phase system (ATPS) based on 1-butyl-3-methylimidazolium chloride ([C4mim]Cl) is presented as a simple, rapid, and effective sample pretreatment technique coupled with high-performance liquid chromatography (HPLC) for analysis of the major opium alkaloids in Pericarpium papaveris. To find optimal conditions, the partition behaviors of codeine and papaverine in ionic liquid/salt aqueous two-phase systems were investigated. Various factors were considered systematically, and the results indicated that both the pH value and the salting-out ability of the salt had a great influence on phase separation. The recoveries of codeine and papaverine from aqueous samples of P. papaveris by the proposed method were 90.0-100.2% and 99.3-102.0%, respectively. PMID:16143571

  6. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. In this work, we therefore extend our earlier work on the need for a proactive system and present a high-performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events, and to do so continuously (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. A data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from the DF and HPC perspectives.
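
    The abstract does not detail the "iterative z algorithm", so the sketch below shows one plausible shape for the detection phase: a generic iterative z-score outlier pass parallelized across event partitions with multiprocessing. It is an illustration of the idea, not the authors' implementation.

```python
import numpy as np
from multiprocessing import Pool

def z_outliers(scores, threshold=3.0, max_iter=10):
    """Iteratively flag events beyond `threshold` standard deviations,
    recomputing mean/std on the surviving points each pass."""
    scores = np.asarray(scores, dtype=float)
    mask = np.ones(scores.size, dtype=bool)
    for _ in range(max_iter):
        mu, sigma = scores[mask].mean(), scores[mask].std()
        new_mask = np.abs(scores - mu) <= threshold * sigma
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return np.where(~mask)[0]  # indices of suspicious events

if __name__ == "__main__":
    # One partition of event scores per worker, processed in parallel.
    partitions = [np.random.randn(100_000) for _ in range(8)]
    with Pool() as pool:
        flagged = pool.map(z_outliers, partitions)
```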

  7. Determination of sunset yellow and tartrazine in food samples by combining ionic liquid-based aqueous two-phase system with high performance liquid chromatography.

    PubMed

    Sha, Ou; Zhu, Xiashi; Feng, Yanli; Ma, Weixing

    2014-01-01

    We propose a simple and effective method, coupling ionic liquid-based aqueous two-phase systems (IL-ATPSs) with high performance liquid chromatography (HPLC), for the determination of tartrazine (Ta) and sunset yellow (SY) in food samples. Under the optimized conditions, IL-ATPSs gave an extraction efficiency of 99% for both analytes, which could then be analyzed directly by HPLC without further treatment. Calibration plots were linear in the range of 0.01-50.0 μg/mL for both Ta and SY. The limits of detection were 5.2 ng/mL for Ta and 6.9 ng/mL for SY. The method was applied successfully to the separation and analysis of tartrazine and sunset yellow in a soft drink, a candy, and an instant powder drink, and gave results consistent with those obtained by the Chinese national standard method. PMID:25538857

  9. Developing collective customer knowledge and service climate: The interaction between service-oriented high-performance work systems and service leadership.

    PubMed

    Jiang, Kaifeng; Chuang, Chih-Hsun; Chiao, Yu-Ching

    2015-07-01

    This study theorized and examined the influence of the interaction between service-oriented high-performance work systems (HPWSs) and service leadership on collective customer knowledge and service climate. Using a sample of 569 employees and 142 managers in footwear retail stores, we found that service-oriented HPWSs and service leadership reduced each other's influence on collective customer knowledge and service climate: the positive influence of service leadership on both outcomes was stronger when service-oriented HPWSs were lower, and likewise the positive influence of service-oriented HPWSs was stronger when service leadership was lower. We further proposed and found that collective customer knowledge and service climate were positively related to objective financial outcomes through service performance. Implications for the literature and managerial practices are discussed. PMID:25486260

  10. Development of a temperature-compensated hot-film anemometer system for boundary-layer transition detection on high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Chiles, H. R.; Johnson, J. B.

    1985-01-01

    A hot-film constant-temperature anemometer (CTA) system was flight-tested and evaluated as a candidate sensor for determining boundary-layer transition on high-performance aircraft. The hot-film gage withstood an extreme flow environment characterized by shock waves and high dynamic pressures, although the CTA's sensitivity to the local total temperature indicated the need for some form of temperature compensation. A temperature-compensation scheme was developed, and two CTAs were modified and flight-tested on the F-104/Flight Test Fixture (FTF) facility at Mach numbers from 0.4 to 1.8 and altitudes from 5,000 to 40,000 ft.

  11. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms and thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  12. High Performance Computing Today

    SciTech Connect

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen rapid change in vendors, architectures, technologies, and system usage. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If one plots the peak performance of the various computers of the last five decades that could have been called the supercomputers of their time (Figure 1), one indeed sees how well this law holds for almost the complete lifespan of modern computing. On average, performance has increased by two orders of magnitude every decade.
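
    Two orders of magnitude per decade corresponds to a compound growth factor of 100^(1/10) per year, as the short check below shows.

```python
# 100x per decade implies a yearly growth factor of 100**(1/10).
yearly_factor = 100 ** (1 / 10)
print(yearly_factor)        # ~1.585, i.e. roughly 58.5% growth per year
print(yearly_factor ** 10)  # compounds back to 100x over a decade
```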

  13. Development of a high-performance, coal-fired power generating system with a pyrolysis gas and char-fired high-temperature furnace

    SciTech Connect

    Shenker, J.

    1995-11-01

    A high-performance power system (HIPPS) is being developed. This system is a coal-fired, combined-cycle plant that will have an efficiency of at least 47 percent, based on the higher heating value of the fuel. The original emissions goal of the project was for NOx and SOx each to be below 0.15 lb/MMBtu; in the Phase 2 RFP this goal was reduced to 0.06 lb/MMBtu. The ultimate goal of HIPPS is an all-coal-fueled system, but initial versions of the system are allowed up to 35 percent heat input from natural gas. Foster Wheeler Development Corporation is currently leading a team effort with AlliedSignal, Bechtel, Foster Wheeler Energy Corporation, Research-Cottrell, TRW and Westinghouse. Previous work on the project was also done by General Electric. The HIPPS plant will use a High-Temperature Advanced Furnace (HITAF) to achieve combined-cycle operation with coal as the primary fuel. The HITAF is an atmospheric-pressure, pulverized-fuel-fired boiler/air heater, used to heat air for the gas turbine and also to transfer heat to the steam cycle. Its design and functions are very similar to those of conventional PC boilers; some important differences, however, arise from the requirements of combined-cycle operation.

  14. START High Performance Discharges

    NASA Astrophysics Data System (ADS)

    Gates, D. A.

    1997-11-01

    Improvements to START (Small Tight Aspect Ratio Tokamak), the first spherical tokamak in the world to achieve high plasma temperature with both a significant pulse length and confinement time, have been ongoing since 1991. Recent modifications include expansion of the existing capacitor banks, allowing plasma currents as high as 300 kA; an increase in the available neutral beam heating power (~500 kW); and improvements to the vacuum system. These improvements have led to the achievement of the world-record plasma β (≡ 2μ_0⟨p⟩/B^2) of ~30% in a tokamak. The normalised β (β_N ≡ βaB/I_p) reached 4.5 with q_95 = 2.3. Properties of the reconstructed equilibrium will be discussed in detail. The theoretical limit to β is higher in a spherical tokamak than in a conventional machine, due to the higher values of normalised current (I_N ≡ I_p/aB) achievable at low aspect ratio. The record β was achieved with I_N ~ 8, while conventional tokamaks are limited to I_N ~ 3 or less. Calculations of the ideal MHD stability of the record discharge indicate that low-n kink modes are stable at high β, but that the entire profile is at or near marginal stability for high-n ballooning modes. The phenomenology of the events leading up to the plasma termination is discussed. An important aspect of the START program is to explore the physics of neutral beam absorption at low aspect ratio. A passive neutral particle analyser has been used to study the temporal and spatial dependence of the fast hydrogen beam ions. These measurements have been used in conjunction with a single-particle orbit code to estimate the fast ion losses due to collisions with slow neutrals from the plasma edge. Numerical analysis of neutral beam power deposition profiles is compared with data from an instrumented beam stop. The global energy confinement time τ_E in beam-heated discharges on START is similar to that obtained in Ohmic discharges, even though the input power has roughly doubled over the Ohmic case
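
    Using the definitions quoted in the abstract, the record figures can be checked numerically. The field, pressure, current, and minor radius below are illustrative values chosen to land near β ≈ 30% and I_N ≈ 8; they are not measured START data.

```python
from math import pi

mu0 = 4e-7 * pi   # vacuum permeability, H/m
B = 0.15          # T, assumed field
p_avg = 2.7e3     # Pa, assumed volume-averaged pressure
a = 0.25          # m, assumed minor radius
Ip = 0.30         # MA, assumed plasma current

beta = 2 * mu0 * p_avg / B**2        # ~0.30, i.e. ~30%
I_N = Ip / (a * B)                   # normalized current, ~8 MA/(m*T)
beta_N = 100 * beta / I_N            # beta (in %) per unit normalized current
print(f"beta = {beta:.0%}, I_N = {I_N:.1f}, beta_N = {beta_N:.1f}")
```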

  15. Selective extraction and determination of vitamin B12 in urine by ionic liquid-based aqueous two-phase system prior to high-performance liquid chromatography.

    PubMed

    Berton, Paula; Monasterio, Romina P; Wuilloud, Rodolfo G

    2012-08-15

    A rapid and simple extraction technique based on an aqueous two-phase system (ATPS) was developed for separation and enrichment of vitamin B12 in urine samples. The proposed ATPS-based method uses the hydrophilic ionic liquid (IL) 1-hexyl-3-methylimidazolium chloride and K2HPO4. After the extraction procedure, the vitamin B12-enriched IL upper phase was injected directly into the high performance liquid chromatography (HPLC) system for analysis. All variables influencing the IL-based ATPS approach (e.g., the composition of the ATPS, pH, and temperature) were evaluated. The average extraction efficiency was 97% under optimum conditions. Only 5.0 mL of sample and a single hydrolysis/deproteinization/extraction step were required, followed by direct injection of the IL-rich upper phase into the HPLC system for vitamin B12 determination. A detection limit of 0.09 μg/mL, a relative standard deviation (RSD) of 4.50% (n=10), and a linear range of 0.40-8.00 μg/mL were obtained. The proposed green analytical procedure was applied satisfactorily to the analysis of samples with highly complex matrices, such as urine. The IL-ATPS technique can thus be considered an efficient tool for extraction of the water-soluble vitamin B12. PMID:22841117

  16. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology

    PubMed Central

    Foran, David J; Yang, Lin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    Objective and design: The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. Results: The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples. PMID:21606133

  17. FPGA Based High Performance Computing

    SciTech Connect

    Bennett, Dave; Mason, Jeff; Sundararajan, Prasanna; Dellinger, Erik; Putnam, Andrew; Storaasli, Olaf O

    2008-01-01

    Current high performance computing (HPC) applications are found in many consumer, industrial and research fields. From web searches to auto crash simulations to weather predictions, these applications require large amounts of power for the compute farms and supercomputers needed to run them. The demand for more and faster computation continues to increase, along with an even sharper increase in the cost of the power required to operate and cool these installations. The ability of standard processor-based systems to address these needs has declined, in both speed of computation and power consumption, over the past few years. This paper presents a new method of computation based upon programmable logic, as represented by Field Programmable Gate Arrays (FPGAs), that addresses these needs in a manner requiring only minimal changes to the current software design environment.

  18. High Performance Fortran: An overview

    SciTech Connect

    Zosel, M.E.

    1992-12-23

    The purpose of this paper is to give an overview of the work of the High Performance Fortran Forum (HPFF). This group of industry, academic, and user representatives has been meeting to define a set of extensions to Fortran dedicated to the special problems posed by very high performance computers, especially the new generation of parallel computers. The paper describes the HPFF effort and its goals and gives a brief description of the functionality of High Performance Fortran (HPF).

  19. Use of ambient light in remote photoplethysmographic systems: comparison between a high-performance camera and a low-cost webcam

    NASA Astrophysics Data System (ADS)

    Sun, Yu; Papin, Charlotte; Azorin-Peris, Vicente; Kalawsky, Roy; Greenwald, Stephen; Hu, Sijung

    2012-03-01

    Imaging photoplethysmography (PPG) is able to capture useful physiological data remotely from a wide range of anatomical locations. Recent imaging PPG studies have concentrated on two broad research directions involving either high-performance cameras or low-cost webcam-based systems. However, little has been reported about the difference between these two techniques, particularly in terms of their performance under illumination with ambient light. We explore these two imaging PPG approaches through the simultaneous measurement of the cardiac pulse acquired from the faces of 10 male subjects and the spectral characteristics of ambient light. Measurements are made before and after a period of cycling exercise. The physiological pulse waves extracted from both imaging PPG systems using the smoothed pseudo-Wigner-Ville distribution yield functional characteristics comparable to those acquired using gold-standard contact PPG sensors. The influence of ambient light intensity on the physiological information is considered, where results reveal an independent relationship between the ambient light intensity and the normalized plethysmographic signals. This provides further support for imaging PPG as a means for practical noncontact physiological assessment, with clear applications in several domains, including telemedicine and homecare.
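
    The study extracts pulse waves with the smoothed pseudo-Wigner-Ville distribution; as a simpler stand-in, the sketch below locates the cardiac frequency in a synthetic facial-ROI brightness trace using SciPy's short-time Fourier spectrogram. The signal, frame rate, and band limits are assumptions for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 30.0                                  # assumed camera frame rate, Hz
t = np.arange(0, 60, 1 / fs)
# Synthetic PPG-like trace: a 1.2 Hz cardiac component plus noise.
ppg = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)

f, tt, Sxx = spectrogram(ppg, fs=fs, nperseg=256, noverlap=192)
band = (f >= 0.7) & (f <= 3.0)             # plausible heart-rate band
hr_hz = f[band][Sxx[band].argmax(axis=0)]  # dominant frequency per window
print(np.median(hr_hz) * 60, "bpm")        # near 72 bpm, within one frequency bin
```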

  20. Role of information systems in controlling costs: the electronic medical record (EMR) and the high-performance computing and communications (HPCC) efforts

    NASA Astrophysics Data System (ADS)

    Kun, Luis G.

    1994-12-01

    On October 18, 1991, the IEEE-USA produced an entity statement which endorsed the vital importance of the High Performance Computer and Communications Act of 1991 (HPCC) and called for the rapid implementation of all its elements. Efforts are now underway to develop a Computer Based Patient Record (CBPR), the National Information Infrastructure (NII) as part of the HPCC, and the so-called 'Patient Card'. Multiple legislative initiatives which address these and related information technology issues are pending in Congress. Clearly, a national information system will greatly affect the way health care delivery is provided to the United States public. Timely and reliable information represents a critical element in any initiative to reform the health care system as well as to protect and improve the health of every person. Appropriately used, information technologies offer a vital means of improving the quality of patient care, increasing access to universal care and lowering overall costs within a national health care program. Health care reform legislation should reflect increased budgetary support and a legal mandate for the creation of a national health care information system by: (1) constructing a National Information Infrastructure; (2) building a Computer Based Patient Record System; (3) bringing the collective resources of our National Laboratories to bear in developing and implementing the NII and CBPR, as well as a security system with which to safeguard the privacy rights of patients and the physician-patient privilege; and (4) utilizing Government (e.g. DOD, DOE) capabilities (technology and human resources) to maximize resource utilization, create new jobs and accelerate technology transfer to address health care issues.

  1. High-performance intraoperative cone-beam CT on a mobile C-arm: an integrated system for guidance of head and neck surgery

    NASA Astrophysics Data System (ADS)

    Siewerdsen, J. H.; Daly, M. J.; Chan, H.; Nithiananthan, S.; Hamming, N.; Brock, K. K.; Irish, J. C.

    2009-02-01

    A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and registration/visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video. Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial involving 12 patients undergoing head and neck surgery, the first implementation of the research prototype in the clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such systems in complex OR environments.
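
    As an illustration of the deformable-registration step, the sketch below runs a classic demons registration with SimpleITK; the multi-scale variant described in the paper would wrap this in a coarse-to-fine pyramid. File names and parameter values are placeholders, not the authors' settings.

```python
import SimpleITK as sitk

def register_demons(fixed_path, moving_path, iterations=100):
    """Estimate a dense displacement field warping `moving` onto `fixed`."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetSmoothDisplacementField(True)  # Gaussian regularization
    demons.SetStandardDeviations(1.5)
    field = demons.Execute(fixed, moving)
    field = sitk.Cast(field, sitk.sitkVectorFloat64)  # pixel type the transform expects
    warped = sitk.Resample(moving, fixed, sitk.DisplacementFieldTransform(field))
    return warped
```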

  2. High-performance size exclusion chromatography with a multi-wavelength absorbance detector study on dissolved organic matter characterisation along a water distribution system.

    PubMed

    Huang, Huiping; Sawade, Emma; Cook, David; Chow, Christopher W K; Drikas, Mary; Jin, Bo

    2016-06-01

    This study examined the associations between dissolved organic matter (DOM) characteristics and potential nitrification in the presence of chloramine along a drinking water distribution system. High-performance size exclusion chromatography (HPSEC) coupled with a multiple-wavelength detector (200-280 nm) was employed to characterise DOM by molecular weight distribution, bacterial activity was analysed using flow cytometry, and a package of simple analytical tools, such as dissolved organic carbon, absorbance at 254 nm, nitrate, nitrite, ammonia, and total disinfectant residual, was also applied; the applicability of these tools to indicate water quality changes in distribution systems was evaluated. Results showed that multi-wavelength HPSEC analysis was useful for providing information about DOM character, and changes in molecular weight profiles at wavelengths below 230 nm could be related to other water quality parameters. Correct selection of the UV wavelengths can be an important factor in providing appropriate indicators associated with different DOM compositions. DOM molecular weight in the range of 0.2-0.5 kDa measured at 210 nm correlated positively with oxidised nitrogen concentration (r=0.99) and with the concentration of active bacterial cells in the distribution system (r=0.85). Our study also showed that the changes in DOM character and bacterial cells were significant at sampling points with decreases in total disinfectant residual. HPSEC-UV measured at 210 nm and flow cytometry can detect changes in low-molecular-weight DOM and bacterial levels, respectively, when nitrification occurs within the chloraminated distribution system. PMID:27266320
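
    The reported associations are plain Pearson correlations between the 210 nm HPSEC signal and the nitrogen and cell-count measurements; a minimal sketch (with placeholder arrays, not the study's data) is below.

```python
import numpy as np

dom_210nm = np.array([0.12, 0.18, 0.25, 0.31, 0.40])  # 0.2-0.5 kDa response
nox = np.array([0.05, 0.09, 0.14, 0.18, 0.24])        # oxidised N, mg/L

r = np.corrcoef(dom_210nm, nox)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly positive for these placeholder data
```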

  3. Development of a novel cytochrome p450 bioaffinity detection system coupled online to gradient reversed-phase high-performance liquid chromatography.

    PubMed

    Kool, Jeroen; van Liempd, Sebastiaan M; Ramautar, Rawi; Schenk, Tim; Meerman, John H N; Irth, Hubertus; Commandeur, Jan N M; Vermeulen, Nico P E

    2005-08-01

    A high-resolution screening platform coupling online affinity detection for mammalian cytochrome P450s (Cyt P450s) to gradient reversed-phase high-performance liquid chromatography (HPLC) is described. To this end, the online Cyt P450 enzyme affinity detection (EAD) system was optimized for enzyme (beta-NF-induced rat liver microsomes), probe substrate (ethoxyresorufin), and organic modifier (methanol or acetonitrile). The optimized Cyt P450 EAD system was first evaluated in flow injection analysis (FIA) mode with 7 known ligands of Cyt P450 1A1/1A2 (alpha-naphthoflavone, beta-naphthoflavone, ellipticine, 9-hydroxy-ellipticine, fluvoxamine, caffeine, and phenacetin). Subsequently, IC50 values were determined online in FIA mode and compared with those obtained under standard microsomal assay conditions. The IC50 values obtained with the online Cyt P450 EAD system agreed well with those obtained in the standard assays. For high-affinity ligands of Cyt P450 1A1/1A2, detection limits of 1 to 3 pmol injected (n=3; signal-to-noise [S/N]=3) were obtained. The individual inhibitory properties of ligands in mixtures were subsequently investigated using the optimized Cyt P450 EAD system coupled online to gradient HPLC. Using the integrated online gradient HPLC Cyt P450 EAD platform, detection limits of 10 to 25 pmol injected (n=1; S/N=3) were obtained for high-affinity ligands. It is concluded that this novel screening technology offers new perspectives for rapid and sensitive screening of individual compounds in mixtures exhibiting affinity for liver microsomal Cyt P450s. PMID:16093552

  4. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  5. Laser videofluorometer system for real-time characterization of high-performance liquid chromatographic eluate. [3-hydroxy-benzo(a)pyrene

    SciTech Connect

    Skoropinski, D.B.; Callis, J.B.; Danielson, J.D.S.; Christian, G.D.

    1986-11-01

    A second generation videofluorometer has been developed for real-time characterization of high-performance liquid chromatographic eluate. The instrument features a nitrogen-laser-pumped dye laser as excitation source and quarter meter polychromator/microchannel plate-intensified diode array as fluorescence detector. The dye laser cavity is tuned with a moving-iron galvanometer scanner grating drive, permitting the laser output to be changed to any wavelength in its range in less than 40 ms. Thus, the optimum excitation wavelength can be chosen for each chromatographic region. A minimum detection limit of 13 pptr has been obtained for 3-hydroxy-benzo(a)pyrene in a conventional fluorescence cuvette with a 30-s data acquisition. For the same substance eluted chromatographically, a minimum detection limit of 50 pg has been obtained, and a linear dynamic range of greater than 3 orders of magnitude observed. An extract of soil that had been contaminated with polyaromatic hydrocarbons was analyzed as a practical test of the system, permitting the quantitation of three known species, and the identification and quantitation of a previously unknown fourth compound.

  6. A meta-analysis of country differences in the high-performance work system-business performance relationship: the roles of national culture and managerial discretion.

    PubMed

    Rabl, Tanja; Jayasinghe, Mevan; Gerhart, Barry; Kühlmann, Torsten M

    2014-11-01

    Our article develops a conceptual framework based primarily on national culture perspectives but also incorporating the role of managerial discretion (cultural tightness-looseness, institutional flexibility), aimed at achieving a better understanding of how the effectiveness of high-performance work systems (HPWSs) may vary across countries. Based on a meta-analysis of 156 HPWS-business performance effect sizes from 35,767 firms and establishments in 29 countries, we found that the mean HPWS-business performance effect size was positive overall (corrected r = .28) and positive in each country, regardless of its national culture or degree of institutional flexibility. In the case of national culture, the HPWS-business performance relationship was, on average, actually more strongly positive in countries where the a priori hypothesized consistency or fit between an HPWS and national culture (according to national culture perspectives) was lower. The exception was tight national cultures, where greater a priori fit of an HPWS with national culture was associated with a more positive HPWS-business performance effect size; in loose cultures (and in cultures that were neither tight nor loose), less a priori hypothesized consistency between an HPWS and national culture was associated with higher HPWS effectiveness. As such, our findings suggest the importance of not only national culture but also managerial discretion in understanding the HPWS-business performance relationship. PMID:25222523

  7. Final Assessment of Preindustrial Solid-State Route for High-Performance Mg-System Alloys Production: Concluding the EU Green Metallurgy Project

    NASA Astrophysics Data System (ADS)

    D'Errico, Fabrizio; Plaza, Gerardo Garces; Giger, Franz; Kim, Shae K.

    2013-10-01

    The Green Metallurgy Project, a LIFE+ project co-financed by the European Union Commission, has now been completed. Its purpose was to establish and assess a preindustrial process capable of using nanostructure-based, high-performance Mg-Zn(Y) magnesium alloys and fully recycled eco-magnesium alloys. In this work, the Consortium presents the final outcome and verification of the completed prototype construction. To compare upstream cradle-to-grave footprints when ternary nanostructured Mg-Y-Zn alloys or recycled eco-magnesium chips are produced during the process cycle using the same equipment, a life cycle analysis was completed following the ISO 14040 methodology. During tests to fine-tune the prototype machinery and compare the quality of semifinished bars produced using the scaled-up system, the Buhler team produced interesting and significant results: their tests showed the ternary Mg-Y-Zn magnesium alloys to have a higher specific strength than the 6000-series wrought aluminum alloys usually employed in automotive components.

  8. Do they see eye to eye? Management and employee perspectives of high-performance work systems and influence processes on service quality.

    PubMed

    Liao, Hui; Toya, Keiko; Lepak, David P; Hong, Ying

    2009-03-01

    Extant research on high-performance work systems (HPWSs) has primarily examined the effects of HPWSs on establishment or firm-level performance from a management perspective in manufacturing settings. The current study extends this literature by differentiating management and employee perspectives of HPWSs and examining how the two perspectives relate to employee individual performance in the service context. Data collected in three phases from multiple sources involving 292 managers, 830 employees, and 1,772 customers of 91 bank branches revealed significant differences between management and employee perspectives of HPWSs. There were also significant differences in employee perspectives of HPWSs among employees of different employment statuses and among employees of the same status. Further, employee perspective of HPWSs was positively related to individual general service performance through the mediation of employee human capital and perceived organizational support and was positively related to individual knowledge-intensive service performance through the mediation of employee human capital and psychological empowerment. At the same time, management perspective of HPWSs was related to employee human capital and both types of service performance. Finally, a branch's overall knowledge-intensive service performance was positively associated with customer overall satisfaction with the branch's service. PMID:19271796

  9. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    PubMed

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analyses. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance. PMID:21967297

  10. Determination of four water-soluble compounds in Salvia miltiorrhiza Bunge by high-performance liquid chromatography with a coulometric electrode array system.

    PubMed

    Ma, Lijuan; Zhang, Xuezhu; Guo, Hui; Gan, Yiru

    2006-04-01

    A method has been developed to determine four water-soluble components, Danshensu (I), protocatechuic acid (II), protocatechuic aldehyde (III), and salvianolic acid B (IV), in the Chinese medicinal plant Salvia miltiorrhiza Bunge using high-performance liquid chromatography with a coulometric electrode array detection (HPLC-CEAD) system. Heat reflux extraction was used to pretreat the sample. The analysis was carried out on a Hypersil C18 column (250 mm x 4.6 mm, 5 microm) with a mobile phase of sodium acetate (pH 2.5, 50 mM) and acetonitrile in gradient mode. An ESA electrochemical detector monitored the four compounds, with the potentials of the four electrodes in series set at 100, 150, 200, and 250 mV, respectively. The pH of the mobile phase and the proportion of acetonitrile were also optimized. Calibration curves showed good linearity, with correlation coefficients (r) greater than 0.9937. Average recoveries of the four compounds were above 92%, and relative standard deviations were less than 6.6%. The method appears to be stable, sensitive, and reproducible for determination of the four water-soluble compounds in S. miltiorrhiza Bunge. PMID:16500160

  11. Engineering development of coal-fired high performance power systems, Phases 2 and 3. Quarterly progress report, October 1--December 31, 1996. Final report

    SciTech Connect

    1996-12-31

    The goals of this program are to develop a coal-fired high performance power generation system (HIPPS) by the year 2000 that is capable of: >47% efficiency (HHV); NOx, SOx, and particulate emissions below 10% of NSPS; coal providing >=65% of heat input; all solid wastes benign; and a cost of electricity 90% of that of present plants. Work reported herein is from Task 1.3 HIPPS Commercial Plant Design, Task 2.2 HITAF Air Heater, and Task 2.4 Duct Heater Design. The impact on cycle efficiency from the integration of various technology advances is presented. The criteria associated with a commercial HIPPS plant design, as well as possible environmental control options, are presented. The design of the HITAF air heaters, both radiative and convective, is the most critical task in the program. In this report, a summary of the effort associated with the radiative air heater designs that have been considered is provided. The primary testing of the air heater design will be carried out in the UND/EERC pilot-scale furnace; progress to date on the design and construction of the furnace is a major part of this report. The results of laboratory and bench-scale activities associated with defining slag properties are presented. Correct material selection is critical for the success of the concept; the materials, both ceramic and metallic, being considered for the radiant air heater are presented. The activities associated with the duct heater are also presented.
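
    As a worked illustration of the headline goals above (our notation, not the report's), the HHV efficiency target and the coal-share constraint can be written as

      \eta_{\mathrm{HHV}} = \frac{W_{\mathrm{net}}}{Q_{\mathrm{coal}} + Q_{\mathrm{gas}}} > 0.47,
      \qquad
      \frac{Q_{\mathrm{coal}}}{Q_{\mathrm{coal}} + Q_{\mathrm{gas}}} \ge 0.65

    where W_net is the net electric output and Q_coal, Q_gas are the heat inputs on a higher-heating-value basis. For example, a plant drawing 650 MWt of its 1000 MWt heat input from coal would have to deliver more than 470 MWe to meet both goals.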

  12. Automated analysis of fluvoxamine in rat plasma using a column-switching system and ion-pair high-performance liquid chromatography.

    PubMed

    Liu, Shicheng; Shinkai, Norihiro; Kakubari, Ikuhiro; Saitoh, Hideo; Noguchi, Ken-ichi; Saitoh, Takashi; Yamauchi, Hitoshi

    2008-12-01

    We have established a robust, fully automated analytical method for the analysis of fluvoxamine in rat plasma using a column-switching ion-pair high-performance liquid chromatography system. The plasma sample was injected onto a precolumn packed with Shim-pack MAYI-ODS (50 microm), where the drug was automatically purified and enriched by on-line solid-phase extraction. After elution of the plasma proteins, the analyte was back-flushed from the precolumn and then separated isocratically on a reversed-phase C18 column (L-column ODS) with a mobile phase (acetonitrile-0.1% phosphoric acid, 36:64, v/v) containing 2 mM sodium 1-octanesulfonate. The analyte was monitored by a UV detector at a wavelength of 254 nm. The calibration line for fluvoxamine showed good linearity in the range of 5-5000 ng/mL (r > 0.999), with a limit of quantification of 5 ng/mL (RSD = 6.51%). Accuracy ranged from -2.94% to 4.82%, and the within- and between-day precision of the assay was better than 8% across the calibration range. The analytical sensitivity and accuracy of this assay are suitable for characterizing the pharmacokinetics of orally administered fluvoxamine in rats. PMID:18655223

  13. Design and implementation of an automated liquid-phase microextraction-chip system coupled on-line with high performance liquid chromatography.

    PubMed

    Li, Bin; Petersen, Nickolaj Jacob; Payán, María D Ramos; Hansen, Steen Honoré; Pedersen-Bjergaard, Stig

    2014-03-01

    An automated liquid-phase microextraction (LPME) device in a chip format has been developed and coupled directly to high performance liquid chromatography (HPLC). A 10-port, 2-position switching valve was used to hyphenate the LPME-chip with the HPLC autosampler and to collect the extracted analytes, which were then delivered to the HPLC column. The LPME-chip-HPLC system was completely automated and controlled by the software of the HPLC instrument. The performance of this system was demonstrated with five alkaloids, i.e., morphine, codeine, thebaine, papaverine, and noscapine, as model analytes. The composition of the supported liquid membrane (SLM) and carrier was optimized in order to achieve reasonable extraction performance for all five alkaloids. With 1-octanol as the SLM solvent and 25 mM sodium octanoate as an anionic carrier, extraction recoveries for the different opium alkaloids ranged between 17% and 45%. The extraction provided high selectivity, and no interfering peaks were observed in the chromatograms of human urine samples spiked with the alkaloids. The detection limits using UV detection were in the range of 1-21 ng/mL for the five opium alkaloids in water samples. The repeatability was within 5.0-10.8% (RSD). The membrane liquid in the LPME-chip was regenerated automatically after every third injection; with this procedure, the liquid membrane remained stable for 3-7 days of continuous operation, depending on the complexity of the sample solutions. With this LPME-chip-HPLC system, series of samples were automatically injected, extracted, separated, and detected without any operator interaction. PMID:24468363

  14. Activities on Realization of High-Power and Steady-State ECRH System and Achievement of High Performance Plasmas in LHD

    SciTech Connect

    Shimozuma, T.; Kubo, S.; Yoshimura, Y.; Igami, H.; Takahashi, H.; Ikeda, R.; Tamura, N.; Kobayashi, S.; Ito, S.; Mizuno, Y.; Takita, Y.; Mutoh, T.; Minami, R.; Kariya, T.; Imai, T.; Idei, H.; Shapiro, M. A.; Temkin, R. J.; Felici, F.; Goodman, T.

    2009-11-26

    Electron Cyclotron Resonance Heating (ECRH) has contributed to the achievement of high performance plasma production, high electron temperature plasmas, and the sustainment of steady-state plasmas in the Large Helical Device (LHD). Our immediate targets for upgrading the ECRH system are injection into LHD of 5 MW for several seconds and of 1 MW for longer than one hour. The improvement will greatly extend the plasma parameter regime. For that purpose, we have been promoting the development and installation of 77 GHz/1-1.5 MW/several-second and 0.3 MW/CW gyrotrons in collaboration with the University of Tsukuba. The transmission lines have been re-examined and improved for high-power and CW transmission. In the recent experimental campaign, two 77 GHz gyrotrons were operated. One more gyrotron, designed for 1.5 MW/2 s output, was constructed and is being tested. We have also been working to improve the total ECRH efficiency for efficient use of gyrotron power and efficient plasma heating, e.g., through a new waveguide alignment method, mode-content analysis, and feedback control of the injection polarization. In the last experimental campaign, the 77 GHz gyrotrons were used in combination with the existing 84 GHz range and 168 GHz gyrotrons; a multi-frequency ECRH system is more flexible in plasma heating experiments and diagnostics. Many experiments have been performed on high electron temperature plasmas through realization of core electron-root confinement (CERC), electron cyclotron current drive (ECCD), Electron Bernstein Wave heating, and steady-state plasma sustainment. Some of the experimental results are briefly described.

  15. Tough high performance composite matrix

    NASA Technical Reports Server (NTRS)

    Pater, Ruth H. (Inventor); Johnston, Norman J. (Inventor)

    1994-01-01

    This invention is a semi-interpenetrating polymer network which includes a high performance thermosetting polyimide having a nadic end group acting as a crosslinking site and a high performance linear thermoplastic polyimide. Provided is an improved high temperature matrix resin which is capable of performing in the 200 to 300 C range. This resin has significantly improved toughness and microcracking resistance, excellent processability, mechanical performance, and moisture and solvent resistance.

  16. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    NASA Astrophysics Data System (ADS)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.

  17. PEGylated hybrid ytterbia nanoparticles as high-performance diagnostic probes for in vivo magnetic resonance and X-ray computed tomography imaging with low systemic toxicity

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Pu, Fang; Liu, Jianhua; Jiang, Liyan; Yuan, Qinghai; Li, Zhengqiang; Ren, Jinsong; Qu, Xiaogang

    2013-05-01

    Novel nanoparticulate contrast agents with low systemic toxicity and low cost have exhibited advantages over routinely used small-molecule contrast agents for the diagnosis and prognosis of disease. Herein, we designed and synthesized PEGylated hybrid ytterbia nanoparticles as high-performance nanoprobes for X-ray computed tomography (CT) imaging and magnetic resonance (MR) imaging both in vitro and in vivo. These well-defined nanoparticles were facile to prepare and cost-effective, meeting the criteria for a biomedical material. Compared with Iobitridol, routinely used in the clinic, our PEG-Yb2O3:Gd nanoparticles provided significantly enhanced contrast at clinical voltages ranging from 80 kVp to 140 kVp, owing to the high atomic number and well-positioned K-edge energy of ytterbium. Through doping with gadolinium, our nanoparticulate contrast agent could simultaneously perform MR imaging, revealing organ enrichment and bio-distribution similar to the CT imaging results. The marked improvement in imaging efficiency was mainly attributed to the high content of Yb and Gd in a single nanoparticle, making these nanoparticles suitable for dual-modal diagnostic imaging at a low single-injection dose. In addition, detailed toxicological studies in vitro and in vivo indicated that the uniformly sized PEG-Yb2O3:Gd nanoparticles possessed excellent biocompatibility and overall safety.

  18. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementations of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C. The parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested on a local high performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets; it demonstrated linear scalability, with speedup proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets; it demonstrated classical performance curves featuring communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size, and the typical speedup achieved (up to 326x) increased linearly with problem size. The precision of the MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially resolved results. The presented parallel versions of the MCML and IAD programs are portable across multiple computing platforms. The parallel programs can significantly speed up simulations for scientists and can be utilized to their full potential on computing systems that are readily available without additional costs.
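
    The equal-share MPI decomposition the authors describe (each core traces its fraction of the photon packets, followed by a single reduction) can be sketched as below. This is a toy absorbing-slab model written against mpi4py, not the MCML physics; the slab parameters and packet count are assumptions for illustration.

      # Toy MPI Monte Carlo in the MCML spirit: each rank traces an equal
      # share of photon packets through a purely absorbing slab, then the
      # absorbed counts are reduced to rank 0. Illustrative only.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_total = 10**6                          # photon packets (problem size)
      n_local = n_total // size                # equal share per core
      rng = np.random.default_rng(seed=rank)   # independent stream per rank

      mu_a, thickness = 1.0, 2.0               # absorption 1/cm, slab in cm
      paths = rng.exponential(1.0 / mu_a, n_local)  # free path to absorption
      absorbed_local = int(np.count_nonzero(paths < thickness))

      absorbed = comm.reduce(absorbed_local, op=MPI.SUM, root=0)
      if rank == 0:
          print(f"absorbed fraction = {absorbed / (n_local * size):.4f}")

    Launched with, e.g., mpirun -n 100 python mc_slab.py, the only communication is the final reduction, which is consistent with the near-linear speedup the authors observe until the per-core work shrinks to the level of the communication overhead.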

  19. Simulation of reconfigurable multifunctional continuous logic devices as advanced components of the next generation high-performance MIMO-systems for the processing and interconnection

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2013-12-01

    We consider the design and modeling of hardware realizations of reconfigurable multifunctional continuous logic devices (RMCLD) as advanced components of next-generation high-performance MIMO systems for processing and interconnection. The RMCLD realize functions of two-valued and continuous logic with current inputs and current outputs on the basis of CMOS current mirrors and circuits that realize limited-difference functions. An advantage of such elements is that variables are encoded by photocurrent levels, which makes it easy to provide optical inputs (via photodetectors (PD)) and optical outputs (via LEDs). The RMCLD design is based on current mirrors realized with 1.5 μm CMOS transistors. With 55-65 transistors, 1 PD and 1 LED, the proposed circuits are quite compact and can be integrated into 1D and 2D arrays. We consider the capabilities of the proposed circuits, show simulation results, and discuss prospective applications, in particular time-pulse coding for multivalued, continuous, neuro-fuzzy and matrix logics. Simulation of the NOT, MIN, MAX, equivalence (EQ) and other functions implemented by the RMCLD showed that the logical variable levels can range from 1 μA to 10 μA in low-power variants. The RMCLD base cell has a power consumption below 1 mW and a processing time of about 1-11 μs at a supply voltage of 2.4-3.3 V. The cells were modeled in OrCAD.
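
    Once the photocurrent levels are normalized to [0, 1], the functions named above have standard continuous-logic forms; the sketch below uses those textbook definitions (the 1-10 μA range is taken from the record, while the specific EQ formula, 1 - |x - y|, is our assumption, not necessarily the authors' circuit-level definition).

      # Continuous-logic primitives over photocurrent-encoded variables.
      # Currents in [I_MIN, I_MAX] uA map linearly to logic values in [0, 1].
      I_MIN, I_MAX = 1.0, 10.0     # logic current levels from the record (uA)

      def norm(i_ua):              # current -> logic value in [0, 1]
          return (i_ua - I_MIN) / (I_MAX - I_MIN)

      def denorm(x):               # logic value -> current (uA)
          return I_MIN + x * (I_MAX - I_MIN)

      def NOT(x): return 1.0 - x               # continuous negation
      def MIN(x, y): return min(x, y)          # continuous conjunction
      def MAX(x, y): return max(x, y)          # continuous disjunction
      def EQ(x, y): return 1.0 - abs(x - y)    # equivalence (assumed form)

      a, b = norm(3.0), norm(8.0)              # inputs of 3 uA and 8 uA
      for name, value in [("NOT a", NOT(a)), ("MIN", MIN(a, b)),
                          ("MAX", MAX(a, b)), ("EQ", EQ(a, b))]:
          print(f"{name:6s} -> {denorm(value):.2f} uA")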

  20. Prospective Randomized Controlled Study on the Efficacy of Multimedia Informed Consent for Patients Scheduled to Undergo Green-Light High-Performance System Photoselective Vaporization of the Prostate

    PubMed Central

    Ham, Dong Yeub; Choi, Woo Suk; Song, Sang Hoon; Ahn, Young-Joon; Park, Hyoung Keun; Kim, Hyeong Gon

    2016-01-01

    Purpose The aim of this study was to evaluate the efficacy of a multimedia informed consent (IC) presentation on the understanding and satisfaction of patients who were scheduled to receive 120-W green-light high-performance system photoselective vaporization of the prostate (HPS-PVP). Materials and Methods A multimedia IC (M-IC) presentation for HPS-PVP was developed. Forty men with benign prostatic hyperplasia who were scheduled to undergo HPS-PVP were prospectively randomized to a conventional written IC group (W-IC group, n=20) or the M-IC group (n=20). The allocated IC was obtained by one certified urologist, followed by a 15-question test (maximum score, 15) to evaluate objective understanding, and questionnaires on subjective understanding (range, 0~10) and satisfaction (range, 0~10) using a visual analogue scale. Results Demographic characteristics, including age and the highest level of education, did not significantly differ between the two groups. No significant differences were found in scores reflecting the objective understanding of HPS-PVP (9.9±2.3 vs. 10.6±2.8, p=0.332) or in subjective understanding scores (7.5±2.1 vs. 8.6±1.7, p=0.122); however, the M-IC group showed higher satisfaction scores than the W-IC group (7.4±1.7 vs. 8.4±1.5, p=0.033). After adjusting for age and educational level, the M-IC group still had significantly higher satisfaction scores. Conclusions M-IC did not enhance the objective knowledge of patients regarding this surgical procedure. However, it improved the satisfaction of patients with the IC process itself. PMID:27169129

  1. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  2. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems. PMID:26872796

  3. High performance computing at Sandia National Labs

    SciTech Connect

    Cahoon, R.M.; Noe, J.P.; Vandevender, W.H.

    1995-10-01

    Sandia's High Performance Computing Environment requires a hierarchy of resources ranging from desktop, to department, to centralized, and finally to very high-end corporate resources capable of teraflop performance linked via high-capacity Asynchronous Transfer Mode (ATM) networks. The mission of the Scientific Computing Systems Department is to provide the support infrastructure for an integrated corporate scientific computing environment that will meet Sandia's needs in high-performance and midrange computing, network storage, operational support tools, and systems management. This paper describes current efforts at SNL/NM to expand and modernize centralized computing resources in support of this mission.

  4. CLUPI, a high-performance imaging system on the ESA-NASA rover of the 2018 ExoMars mission to discover biofabrics on Mars

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Westall, F.; Hofmann, B. A.; Spray, J. G.; Cockell, C.; Kempe, S.; Griffiths, A. D.; De Sanctis, M. C.; Colangeli, L.; Koschny, D.; Pullan, D.; Föllmi, K.; Diamond, L.; Josset, M.; Javaux, E.; Esposito, F.; Barnes, D.

    2012-04-01

    The scientific objectives of the ESA-NASA rover of the 2018 mission of the ExoMars Programme are to search for traces of past or present life and to characterise the near-subsurface. Both objectives require study of the rock/regolith materials in terms of structure, texture, mineralogy, and elemental and organic composition. The 2018 ExoMars rover payload consists of a suite of complementary instruments designed to reach these objectives. CLUPI, the high-performance colour close-up imager on board the 2018 ESA-NASA rover, plays an important role in attaining the mission objectives: it is the equivalent of the hand lens that no geologist is without when undertaking field work. CLUPI is a powerful, highly integrated, miniaturized (<700 g), low-power, robust imaging system able to operate at very low temperatures (-120°C). CLUPI has a working distance from 10 cm to infinity, providing outstanding pictures with a 2652x1768 colour detector; at 10 cm, the resolution is 7 micrometres/pixel in colour. The focus mechanism and the optical-mechanical interface are a smart titanium assembly that can sustain a wide temperature range. The concept benefits from well-proven heritage: the Proba, Rosetta, Mars Express and Smart-1 missions… Because the main science objective of ExoMars concerns the search for life, whose traces on Mars are likely to be cryptic, close-up observation of the rocks and granular regolith will be critical to the decision as to whether to drill and sample the nearby underlying materials; CLUPI is thus the essential final step in the choice of drill site. CLUPI's observations of rock outcrops also serve other purposes: it could observe the placement of the drill head, and it will be able to observe the fines that come out of the drill hole, including any colour stratification linked to lithological changes with depth. Finally, CLUPI will provide detailed observation of the surface of the core drilled materials when

  5. High-performance membrane chromatography.

    PubMed

    Belenkii, B G; Malt'sev, V G

    1995-02-01

    In gradient chromatography for proteins migrating along the chromatographic column, the critical distance X0 has been shown to exist at which the separation of zones is at a maximum and band spreading is at a minimum. With steep gradients and small elution velocity, the column length may be reduced to the level of membrane thickness--about one millimeter. The peculiarities of this novel separation method for proteins, high-performance membrane chromatography (HPMC), are discussed and stepwise elution is shown to be especially effective. HPMC combines the advantages of membrane technology and high-performance liquid chromatography, and avoids their drawbacks. PMID:7727132

  6. High Performance Photovoltaic Project Overview

    SciTech Connect

    Symko-Davies, M.; McConnell, R.

    2005-01-01

    The High-Performance Photovoltaic (HiPerf PV) Project was initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and environment in the 21st century. To accomplish this, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. In this paper, we describe the recent research accomplishments in the in-house directed efforts and the research efforts under way in the subcontracted area.

  7. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A NASA Phase I SBIR program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope which is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the heat transport requirement for Phase II of the program is 30.5 kilowatt-meters at operating temperatures from 0 to 100 C.

  8. High-Performance Bipropellant Engine

    NASA Technical Reports Server (NTRS)

    Biaglow, James A.; Schneider, Steven J.

    1999-01-01

    TRW, under contract to the NASA Lewis Research Center, has successfully completed over 10 000 sec of testing of a rhenium thrust chamber manufactured via a new-generation powder metallurgy. High performance was achieved for two different propellants, N2O4- N2H4 and N2O4 -MMH. TRW conducted 44 tests with N2O4-N2H4, accumulating 5230 sec of operating time with maximum burn times of 600 sec and a specific impulse Isp of 333 sec. Seventeen tests were conducted with N2O4-MMH for an additional 4789 sec and a maximum Isp of 324 sec, with a maximum firing duration of 700 sec. Together, the 61 tests totalled 10 019 sec of operating time, with the chamber remaining in excellent condition. Of these tests, 11 lasted 600 to 700 sec. The performance of radiation-cooled rocket engines is limited by their operating temperature. For the past two to three decades, the majority of radiation-cooled rockets were composed of a high-temperature niobium alloy (C103) with a disilicide oxide coating (R512) for oxidation resistance. The R512 coating practically limits the operating temperature to 1370 C. For the Earth-storable bipropellants commonly used in satellite and spacecraft propulsion systems, a significant amount of fuel film cooling is needed. The large film-cooling requirement extracts a large penalty in performance from incomplete mixing and combustion. A material system with a higher temperature capability has been matured to the point where engines are being readied for flight, particularly the 100-lb-thrust class engine. This system has powder rhenium (Re) as a substrate material with an iridium (Ir) oxidation-resistant coating. Again, the operating temperature is limited by the coating; however, Ir is capable of long-life operation at 2200 C. For Earth-storable bipropellants, this allows for the virtual elimination of fuel film cooling (some film cooling is used for thermal control of the head end). This has resulted in significant increases in specific impulse performance

  9. Panelized high performance multilayer insulation

    NASA Technical Reports Server (NTRS)

    Burkley, R. A.; Shriver, C. B.; Stuckey, J. M.

    1968-01-01

    Multilayer insulation coverings, in which low-conductivity foam spacers are interleaved with quarter-mil aluminized polymer film radiation shields, cover flight-type liquid hydrogen tankage of space vehicles with a removable, structurally compatible, lightweight, high-performance cryogenic insulation capable of surviving extended space mission environments.

  10. High performance rolling element bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Jr., Roger W. (Inventor); Olinger, Jr., John B. (Inventor); Owen, Samuel S. (Inventor); Poole, William E. (Inventor); Haluck, David A. (Inventor)

    1993-01-01

    A high performance rolling element bearing (5) which is particularly suitable for use in a cryogenically cooled environment, comprises a composite cage (45) formed from glass fibers disposed in a solid lubricant matrix of a fluorocarbon polymer. The cage includes inserts (50) formed from a mixture of a soft metal and a solid lubricant such as a fluorocarbon polymer.

  11. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have trained six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  12. High performance bio-integrated devices

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because coupling such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, high performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  13. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  14. The design and use of a simple System Suitability Test Mix for generic reverse phase high performance liquid chromatography-mass spectrometry systems and the implications for automated system monitoring using global software tracking.

    PubMed

    Mutton, Ian; Boughtflower, Bob; Taylor, Nick; Brooke, Daniel

    2011-06-10

    The development of a seven-component test mixture designed for use with a generic gradient and a reversed-phase high performance liquid chromatography-mass spectrometry (RP-HPLC-MS) system is discussed. Unlike many test mixtures formulated to characterise column quality at neutral pH, the test mixture reported here was designed to permit an overall suitability assessment of the whole liquid chromatography-mass spectrometry (LCMS) system: it tests the chromatographic performance of the column as well as certain aspects of the performance of the individual instrumental components. The System Suitability Test Mix can be used for low- and high-pH generic reverse phase LCMS analysis. Four phthalates are used: diethyl phthalate (DEP), diamyl phthalate (DAP), di-n-hexyl phthalate (DHP) and dioctyl phthalate (DOP). Three other probes are employed: 8-bromoguanosine (8-BG), amitriptyline (Ami), and 4-chlorocinnamic acid (4-CCA). We show that analysis of this test mixture can alert the user when any part of the system (instrument or column) contributes to a loss of overall performance and may require remedial action, and we demonstrate that it can provide information that enables data quality control to be documented. PMID:21543072
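
    In software-tracking terms, a suitability run of this kind reduces to checking each probe's retention time and response against stored tolerance windows. The sketch below is our own hypothetical illustration of such a check (probe names are from the record; every limit and measurement is invented), not the authors' monitoring software.

      # Hypothetical system-suitability check: each probe must elute inside
      # a retention-time window and above a minimum response. All limits and
      # measurements are invented for illustration.
      LIMITS = {                 # probe: (rt_lo_min, rt_hi_min, min_area)
          "8-BG":  (0.4, 0.7, 5e4),
          "Ami":   (1.2, 1.6, 1e5),
          "4-CCA": (1.8, 2.2, 8e4),
          "DEP":   (2.5, 2.9, 1e5),
      }

      def check_run(measured):
          """measured: {probe: (rt_min, peak_area)} -> list of failures."""
          failures = []
          for probe, (lo, hi, min_area) in LIMITS.items():
              rt, area = measured[probe]
              if not lo <= rt <= hi:
                  failures.append(f"{probe}: RT {rt:.2f} outside [{lo}, {hi}]")
              if area < min_area:
                  failures.append(f"{probe}: area {area:.0f} < {min_area:.0f}")
          return failures

      run = {"8-BG": (0.55, 6.1e4), "Ami": (1.71, 1.3e5),
             "4-CCA": (2.00, 9.2e4), "DEP": (2.70, 1.2e5)}
      for msg in check_run(run) or ["system suitable"]:
          print(msg)     # here: flags Ami eluting late at 1.71 min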

  15. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  16. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.

  17. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  18. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we present this report describing our findings, along with an associated spreadsheet outlining the current capabilities and characteristics of leading and emerging tools in the high performance computing arena. The first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available on which to utilize these tools and technologies for software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information, and a table of the discussed development tools and their operational environments is included at the end.

  19. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flight with such capabilities as short takeoff and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  20. DOE research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication in numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  1. Ultra-Sensitive Elemental Analysis Using Plasmas 5.Speciation of Arsenic Compounds in Biological Samples by High Performance Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry System

    NASA Astrophysics Data System (ADS)

    Kaise, Toshikazu

    Arsenic originating from the lithosphere is widely distributed in the environment, and many environmental arsenicals exist as organic and methylated species. These arsenic compounds in drinking water or food products of marine origin are absorbed in the human digestive tract, metabolized in the body, and excreted via the urine. Because arsenic shows differing biological behaviour depending on its chemical species, the biological characteristics of each species must be determined. It is thought that metabolic pathways for arsenic and a degree of arsenic circulation exist in aqueous ecosystems. In this paper, the current status of the speciation analysis of arsenic in environmental and biological samples by HPLC/ICP-MS (High Performance Liquid Chromatography-Inductively Coupled Plasma Mass Spectrometry) is summarized using recent data.

  2. High-Performance Thermoelectric Semiconductors

    NASA Technical Reports Server (NTRS)

    Fleurial, Jean-Pierre; Caillat, Thierry; Borshchevsky, Alexander

    1994-01-01

    Figures of merit are almost double those of current state-of-the-art thermoelectric materials. IrSb3 is a semiconductor found to exhibit exceptional thermoelectric properties. CoSb3 and RhSb3 have the same skutterudite crystallographic structure as IrSb3 and exhibit exceptional transport properties that are expected to contribute to high thermoelectric performance. These three compounds form solid solutions. This combination of properties offers potential for the development of new high-performance thermoelectric materials for more efficient thermoelectric power generators, coolers, and detectors.
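
    The record does not define its figure of merit; the standard dimensionless thermoelectric figure of merit implied here is

      ZT = \frac{S^{2}\,\sigma\,T}{\kappa}

    where S is the Seebeck coefficient, \sigma the electrical conductivity, \kappa the thermal conductivity, and T the absolute temperature, so "almost double" means roughly twice the S^2\sigma/\kappa product of prior materials at a given temperature.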

  3. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  4. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  5. Determination of boron at sub-ppm levels in uranium oxide and aluminum by hyphenated system of complex formation reaction and high-performance liquid chromatography (HPLC).

    PubMed

    Rao, Radhika M; Aggarwal, Suresh K

    2008-04-15

    Boron, at sub-ppm levels, in U3O8 powder and aluminum metal was determined using complex formation and dynamically modified reversed-phase high-performance liquid chromatography (RP-HPLC). Curcumin was used to complex the boron extracted with 2-ethyl-1,3-hexanediol (EHD). Separation of the complex from excess reagent, and its subsequent determination with an online diode array detector (DAD), was carried out by HPLC. The calibration curve was linear for boron amounts in the sample ranging from 0.02 microg to 0.5 microg. A precision of about 10% was achieved for boron determination in samples containing less than 1 ppmw of boron. The values obtained by HPLC were in good agreement with the data available from other analytical techniques, and their precision was much better than that reported for the other techniques. The present hyphenated methodology of complex formation reaction and HPLC is attractive for its cost performance, simplicity, versatility and availability when compared with spectroscopic techniques such as ICP-MS and ICP-AES. PMID:18371924

  6. High performance storable propellant resistojet

    NASA Technical Reports Server (NTRS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985, resistojets were used on a limited number of space missions. Capability increased in stages, from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to an 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985, fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base, NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state-of-the-art. Higher-performance thrusters promised to increase the market demand for resistojets and to enable space missions requiring higher performance. During the program, three resistojets were fabricated and tested, high-temperature wire and coupon materials tests were completed, and a life test was conducted on an advanced gas generator.

  7. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    Reported in this paper is the two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe(3)O(4) nanoparticles were carefully screened for homogeneous doping. Oleic acid-stabilized Fe(3)O(4) nanoparticles, synthesized via high-temperature organic-phase decomposition of an iron precursor, showed clear advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO(3)-TMPTA, a cross-linker), the magnetic nanoparticles were homogeneously doped into an acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remotely controlled blending of microfluids. The development of high-quality magnetic photoresists should lead to high-performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  8. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  9. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  10. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single-axis mechanism can switch the antenna's optical axis by +/-1.5 arcmin within 10 ms or +/-5 arcmin within 20 ms and maintains pointing stability within the antenna's 0.6 arcsec error budget. The lightweight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2; its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving-coil motors are mounted on flex pivots, and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance, around 20 Hz, is compensated with a digital PID servo loop that provides a closed-loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
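
    The servo structure described (a roughly 20 Hz mechanical resonance inside a digital PID loop closed near 100 Hz) can be imitated with a short discrete-time simulation; the plant constants and gains below are hypothetical stand-ins, not the ALMA servo's tuned values.

      # Discrete PID position loop around a lightly damped second-order
      # plant with a ~20 Hz resonance. All numbers are illustrative.
      import math

      dt = 1e-4                            # 10 kHz update rate (assumed)
      w0 = 2 * math.pi * 20.0              # 20 Hz mechanical resonance
      zeta = 0.02                          # light structural damping
      kp, ki, kd = 4.0e5, 2.0e7, 900.0     # hypothetical PID gains

      pos, vel, integ = 0.0, 0.0, 0.0
      target = 1.0                         # commanded beam switch (a.u.)

      for _ in range(int(0.05 / dt)):      # simulate 50 ms
          err = target - pos
          integ += err * dt
          u = kp * err + ki * integ - kd * vel   # derivative on measurement
          acc = u - 2 * zeta * w0 * vel - w0**2 * pos
          vel += acc * dt                        # semi-implicit Euler step
          pos += vel * dt

      print(f"position after 50 ms: {pos:.3f} (target {target})")

    With gains of this order the closed loop is well damped near 100 Hz and the step settles within a few tens of milliseconds, of the same order as the 10-20 ms switching times quoted in the record.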

  11. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with their low raw materials cost, great potential for simple and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to high performance perovskite solar cells, with findings that benefit this unique materials/device system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering for high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of the vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on the design of carrier transport materials and electrodes, in combination with high quality perovskite films, which delivers 15-20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architecture, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  12. High Performance Solution Processable TFTs

    NASA Astrophysics Data System (ADS)

    Gundlach, David

    2008-03-01

    Organic-based electronic devices offer the potential to significantly impact the functionality and pervasiveness of large-area electronics. We report on soluble acene-based organic thin film transistors (OTFTs) where the microstructure of as-cast films can be precisely controlled via interfacial chemistry. Chemically tailoring the source/drain contact interface is a novel route to self-patterning of soluble small molecule organic semiconductors and enables the growth of highly ordered regions along opposing contact edges which extend into the transistor channel. The unique film forming properties of soluble fluorinated anthradithiophenes allows us to fabricate high performance OTFTs, OTFT circuits, and to deterministically study the influence of the film microstructure on the electrical characteristics of devices. Most recently we have grown single crystals of soluble fluorinated anthradithiophenes by vapor transport method allowing us to probe deeper into their intrinsic properties and determine the potential and limitations of this promising family of oligomers for use in organic-based electronic devices. Co-Authors: O. D. Jurchescu^1,4, B. H. Hamadani^1, S. K. Park^4, D. A. Mourey^4, S. Subramanian^5, A. J. Moad^2, R. J. Kline^3, L. C. Teague^2, J. G. Kushmerick^2, L. J. Richter^2, T. N. Jackson^4, and J. E. Anthony^5 ^1Semiconductor Electronics Division, ^2Surface and Microanalysis Science Division, ^3Polymers Division, National Institute of Standards and Technology, Gaithersburg, MD 20899 ^4Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802 ^5Department of Chemistry, University of Kentucky, Lexington, KY 40506-0055

  13. DOE High Performance Concentrator PV Project

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2005-08-01

    Much in demand are next-generation photovoltaic (PV) technologies that can be used economically to make a large-scale impact on world electricity production. The U.S. Department of Energy (DOE) initiated the High-Performance Photovoltaic (HiPerf PV) Project to substantially increase the viability of PV for cost-competitive applications so that PV can contribute significantly to both our energy supply and environment. To accomplish such results, the National Center for Photovoltaics (NCPV) directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices with the goal of enabling progress of high-efficiency technologies toward commercial-prototype products. We will describe the details of the subcontractor and in-house progress in exploring and accelerating pathways of III-V multijunction concentrator solar cells and systems toward their long-term goals. By 2020, we anticipate that this project will have demonstrated 33% system efficiency and a system price of $1.00/Wp for concentrator PV systems using III-V multijunction solar cells with efficiencies over 41%.

  14. Development of a high-performance coal-fired power generating system with pyrolysis gas and char-fired high temperature furnace (HITAF)

    SciTech Connect

    Not Available

    1992-11-01

    A concept for an advanced coal-fired combined-cycle power generating system is currently being developed. The first phase of this three-phase program consists of conducting the necessary research and development to define the system, evaluate the economic and technical feasibility of the concept, and prepare an R&D plan to develop the concept further. Foster Wheeler Development Corporation is leading a team of companies involved in this effort. The system proposed to meet these goals is a combined-cycle system in which air for a gas turbine is indirectly heated to approximately 1800°F in furnaces fired with coal-derived fuels and then directly heated in a natural-gas-fired combustor up to about 2400°F. The system is based on a pyrolyzing process that converts the coal into a low-Btu fuel gas and char. The fuel gas is a relatively clean fuel, and it is fired to heat tube surfaces that are susceptible to corrosion and problems from ash deposition. In particular, the high-temperature air heater tubes, which will need to be made of a ceramic material, will be located in a separate furnace or region of a furnace that is exposed to combustion products from the low-Btu fuel gas only. A simplified process flow diagram is shown.

  15. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  16. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  17. The High Performance of Dutch and Flemish 15-Year-Old Native Pupils: Explaining Country Differences in Math Scores between Highly Stratified Educational Systems

    ERIC Educational Resources Information Center

    Prokic-Breuer, Tijana; Dronkers, Jaap

    2012-01-01

    This paper aims to explain the high scores of 15-year-old native pupils in The Netherlands and Flanders by comparing them with the scores of pupils in countries with the same highly stratified educational system: Wallonia, the German "Länder," the Swiss German cantons, and Austria. We use the data from the Programme for International Pupil…

  18. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  19. High Performance Database Management for Earth Sciences

    NASA Technical Reports Server (NTRS)

    Rishe, Naphtali; Barton, David; Urban, Frank; Chekmasov, Maxim; Martinez, Maria; Alvarez, Elms; Gutierrez, Martha; Pardo, Philippe

    1998-01-01

    The High Performance Database Research Center at Florida International University is completing the development of a highly parallel database system based on the semantic/object-oriented approach. This system provides exceptional usability and flexibility. It allows shorter application design and programming cycles and gives the user control via an intuitive information structure. It empowers the end-user to pose complex ad hoc decision support queries. Superior efficiency is provided through a high level of optimization, which is transparent to the user. Manifold reduction in storage size is allowed for many applications. This system allows for operability via internet browsers. The system will be used for the NASA Applications Center program to store remote sensing data, as well as for Earth Science applications.

  20. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and expanded to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants, and where relationships are, in general, adversarial as opposed to cooperative, the chances that any one building system will fail are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  1. A Novel Low-Power, High-Performance, Zero-Maintenance Closed-Path Trace Gas Eddy Covariance System with No Water Vapor Dilution or Spectroscopic Corrections

    NASA Astrophysics Data System (ADS)

    Sargent, S.; Somers, J. M.

    2015-12-01

    Trace-gas eddy covariance flux measurements can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictate the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean, and water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically-cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power band width of 3.5 Hz.
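
    As a generic illustration (not from the paper) of what the reported 3.5 Hz half-power bandwidth implies, assuming the system behaves as a first-order low-pass filter:

```python
# Illustrative sketch: spectral attenuation of a closed-path EC system modeled
# as a first-order low-pass filter with the half-power bandwidth quoted above.
f_half = 3.5  # Hz, half-power bandwidth from the abstract

def gain_squared(f):
    """Power transfer of a first-order low-pass; equals 0.5 at f = f_half."""
    return 1.0 / (1.0 + (f / f_half) ** 2)

for f in [0.1, 1.0, 3.5, 10.0]:  # example frequencies (our choice)
    print(f"{f:5.1f} Hz: fraction of cospectral power passed = {gain_squared(f):.2f}")
```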

  2. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    NASA Astrophysics Data System (ADS)

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-02-01

    Inspired by the composition of adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (CMF@PDA) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For the catalytic reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the working efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min−1, while the TOF in the conventional batch reaction was 0.643 min−1. The catalytic fibers also showed good recyclability: they could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble-metal nanoparticles, and is amenable to automated industrial processes.
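
    A hypothetical worked example of the turnover-frequency arithmetic behind the figures above; the Pd loading and converted amount below are invented for illustration, and only the two TOF values come from the abstract:

```python
# Hypothetical TOF arithmetic: only the 1.587 and 0.643 min^-1 values are from
# the abstract; the catalyst and substrate amounts are invented placeholders.
def tof(mol_converted, mol_catalyst, minutes):
    """Turnover frequency: moles converted per mole of catalyst per minute."""
    return mol_converted / (mol_catalyst * minutes)

mol_pd = 1.0e-6           # mol Pd in the bed (assumed)
mol_substrate = 1.587e-6  # mol 4-nitrophenol reduced in one minute (assumed)
print(tof(mol_substrate, mol_pd, 1.0))  # 1.587 min^-1, the fixed-bed value
print(1.587 / 0.643)                    # fixed bed is ~2.5x the batch TOF
```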

  3. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance

    PubMed Central

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (CMF@PDA) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For the catalytic reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the working efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min−1, while the TOF in the conventional batch reaction was 0.643 min−1. The catalytic fibers also showed good recyclability: they could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble-metal nanoparticles, and is amenable to automated industrial processes. PMID:26902657

  4. Mussel-inspired Functionalization of Cotton for Nano-catalyst Support and Its Application in a Fixed-bed System with High Performance.

    PubMed

    Xi, Jiangbo; Xiao, Junwu; Xiao, Fei; Jin, Yunxia; Dong, Yue; Jing, Feng; Wang, Shuai

    2016-01-01

    Inspired by the composition of adhesive and reductive proteins secreted by marine mussels, polydopamine (PDA) was used to coat cotton microfiber (CMF) and then acted as the reducing agent for the growth of Pd nanoparticles on the PDA-coated CMF (CMF@PDA) composites. The resultant CMF@PDA/Pd composites were then packed in a column for further use in a fixed-bed system. For the catalytic reduction of 4-nitrophenol, the flow rate of the 4-aminophenol solution (0.5 mM) was as high as 60 mL/min. The fixed-bed system even exhibited performance superior to the conventional batch reaction process because it greatly improved the working efficiency of the catalytic fibers. Consequently, its turnover frequency (TOF) was up to 1.587 min−1, while the TOF in the conventional batch reaction was 0.643 min−1. The catalytic fibers also showed good recyclability: they could be reused for nine successive cycles without loss of activity. Furthermore, the catalytic system based on CMF@PDA/Pd can also be applied to the Suzuki coupling reaction, with iodobenzene conversion up to 96.7%. The strategy used to prepare the CMF@PDA/Pd catalytic fixed bed is simple, economical and scalable; it can also be applied to coating different microfibers and loading other noble-metal nanoparticles, and is amenable to automated industrial processes. PMID:26902657

  5. Small-Scale High-Performance Optics

    SciTech Connect

    WILSON, CHRISTOPHER W.; LEGER, CHRIS L.; SPLETZER, BARRY L.

    2002-06-01

    Historically, high resolution, high slew rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror the steerable array could realize a micro-scale pan, tilt and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than MEMS elements). This LDRD provided the first steps towards the goal of a new class of small-scale high-performance optics based on MEMS technology. A large-scale, proof of concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (<1 cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror, providing much higher pixel density in the region of interest than is possible with a full 360° imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and axi-symmetric reflector optimization were performed to address depth of focus issues that occurred in the proof of concept system. The resulting equations will provide the tools for developing future system designs.
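
    A minimal sketch of the kind of image dewarping described above, mapping an annular convex-mirror image to a panoramic strip. The linear radius-to-elevation mapping, image size, and mirror radii are assumptions; a real system would use the mirror's measured profile and proper interpolation:

```python
# Illustrative catadioptric unwarp (not the report's actual algorithm): sample
# the annular mirror image along (radius, azimuth) rays to build a panorama.
import numpy as np

def unwarp(img, cx, cy, r_min, r_max, out_w=720, out_h=180):
    """Map an annular mirror image centered at (cx, cy) to a panoramic strip."""
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)  # azimuth
    r = np.linspace(r_max, r_min, out_h)                      # elevation proxy
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    x = (cx + rr * np.cos(tt)).astype(int).clip(0, img.shape[1] - 1)
    y = (cy + rr * np.sin(tt)).astype(int).clip(0, img.shape[0] - 1)
    return img[y, x]  # nearest-neighbour lookup; real code would interpolate

panorama = unwarp(np.random.rand(480, 640), cx=320, cy=240, r_min=50, r_max=230)
print(panorama.shape)  # (180, 720)
```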

  6. Facilitating NASA's Use of GEIA-STD-0005-1, Performance Standard for Aerospace and High Performance Electronic Systems Containing Lead-Free Solder

    NASA Technical Reports Server (NTRS)

    Plante, Jeannete

    2010-01-01

    GEIA-STD-0005-1 defines the objectives of, and requirements for, documenting processes that assure customers and regulatory agencies that aerospace and high-performance (AHP) electronic systems containing lead-free solder, piece parts, and boards will satisfy the applicable requirements for performance, reliability, airworthiness, safety, and certifiability throughout the specified life of performance. It communicates requirements for a Lead-Free Control Plan (LFCP) to assist suppliers in the development of their own Plans. The Plan documents the Plan Owner's (supplier's) processes that assure their customer, and all other stakeholders, that the Plan Owner's products will continue to meet their requirements. The presentation reviews quality assurance requirements traceability and LFCP template instructions.

  7. A radio-high-performance liquid chromatography dual-flow cell gamma-detection system for on-line radiochemical purity and labeling efficiency determination.

    PubMed

    Lindegren, S; Jensen, H; Jacobsson, L

    2014-04-11

    In this study, a method of determining radiochemical yield and radiochemical purity using radio-HPLC detection employing a dual-flow-cell system is evaluated. The dual-flow cell, consisting of a reference cell and an analytical cell, was constructed from two PEEK capillary coils to fit into the well of a NaI(Tl) detector. The radio-HPLC flow was directed from the injector to the reference cell allowing on-line detection of the total injected sample activity prior to entering the HPLC column. The radioactivity eluted from the column was then detected in the analytical cell. In this way, the sample will act as its own standard, a feature enabling on-line quantification of the processed radioactivity passing through the system. All data were acquired on-line via an analog signal from a rate meter using chromatographic software. The radiochemical yield and recovery could be simply and accurately determined by integration of the peak areas in the chromatogram obtained from the reference and analytical cells using an experimentally determined volume factor to correct for the effect of different cell volumes. PMID:24630054

  8. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  9. High-performance parallel input device

    NASA Astrophysics Data System (ADS)

    Daniel, R. W.; Fischer, Patrick J.; Hunter, B.

    1993-12-01

    Research into force-reflecting remote manipulation has recently started to move away from common-error systems towards explicit force control. In order to maximize the benefit provided by explicit force reflection, the designer has to take into account the asymmetry of the bandwidths of the forward and reflecting loops. This paper reports on a high performance system designed and built at Oxford University and Harwell Laboratories, and on the preliminary results achieved when performing simple force-reflecting tasks. The input device is based on a modified Stewart Platform, which offers the potential of very high bandwidth force reflection, well above the normal 2-10 Hz range achieved with common-error systems. The slave is a nuclear-hardened Puma industrial robot, offering a low-cost, reliable solution to remote manipulation tasks.

  10. Achieving high performance on the Intel Paragon

    SciTech Connect

    Greenberg, D.S.; Maccabe, B.; Riesen, R.; Wheat, S.; Womble, D.

    1993-11-01

    When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion of previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix factorization, which had required over 10 hours on the previous machine, completed in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
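
    As a rough consistency check of the quoted figures (our arithmetic, not the paper's), assuming the standard ~(2/3)n³ floating-point operation count for dense LU factorization:

```python
# Back-of-the-envelope check: dense LU needs about (2/3) n^3 operations.
n = 44_000
flops = (2 / 3) * n ** 3   # ~5.7e13 operations
rate = 40e9                # 40 GFLOPS, as reported
print(flops / rate / 60)   # ~24 minutes -- consistent with "< 1/2 hour"
```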

  11. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  12. High-Performance Schools Make Cents.

    ERIC Educational Resources Information Center

    Nielsen-Palacios, Christian

    2003-01-01

    Describes the educational benefits of high-performance schools, buildings that are efficient, healthy, safe, and easy to operate and maintain. Also briefly describes how to create a high-performance school drawn from volume I (Planning) of the three-volume Collaborative for High Performance Schools (CHPS) "Best Practices Manual." (For more…

  13. EDITORIAL: High performance under pressure

    NASA Astrophysics Data System (ADS)

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to an applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane motion in a scheme that is CMOS-based and fully compatible with future very-large scale integration of

  14. Designing and simulation smart multifunctional continuous logic device as a basic cell of advanced high-performance sensor systems with MIMO-structure

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-01-01

    We propose a design and simulation of hardware realizations of smart multifunctional continuous logic devices (SMCLD) as advanced basic cells of sensor systems with MIMO structure for image processing and interconnection. The SMCLD realizes functions of two-valued, multi-valued and continuous logic with current inputs and current outputs. Such advanced basic cells also realize nonlinear time-pulse transformation, analog-to-digital conversion and neural logic. These elements have a number of advantages: high speed and reliability, simplicity, small power consumption, and a high integration level. The SMCLD concept is based on current mirrors realized with 1.5 µm CMOS transistors; with only 50÷70 transistors, one photodiode (PD) and one LED, the offered circuits are quite compact. Simulation results for NOT, MIN, MAX, equivalence (EQ), normalized summation, averaging and other functions implemented by the SMCLD show that the level of the logical variables can range from 0.1 µA to 10 µA for low-power variants. The SMCLD has low power consumption (<1 mW) and a processing time of about 1÷11 µs at a supply voltage of 2.4÷3.3 V.
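
    An illustrative software model of the continuous-logic operations named above, with logic levels normalized to [0, 1]. The equivalence definition used here is one common convention in continuous logic and may differ from the one implemented in the hardware:

```python
# Continuous-logic operations over normalized levels in [0, 1] (illustrative).
def c_not(a):     return 1.0 - a            # continuous negation
def c_min(a, b):  return min(a, b)          # continuous AND
def c_max(a, b):  return max(a, b)          # continuous OR
def c_eq(a, b):   return 1.0 - abs(a - b)   # one common equivalence definition

a, b = 0.2, 0.7
print(c_not(a), c_min(a, b), c_max(a, b), c_eq(a, b))  # 0.8 0.2 0.7 0.5
```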

  15. Advances in Plexcore active layer technology systems for organic photovoltaics: roof-top and accelerated lifetime analysis of high performance organic photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Laird, Darin W.; Vaidya, Swanand; Li, Sergey; Mathai, Mathew; Woodworth, Brian; Sheina, Elena; Williams, Shawn; Hammond, Troy

    2007-09-01

    We report NREL-certified efficiencies and initial lifetime data for organic photovoltaic (OPV) cells based on Plexcore PV photoactive layer and Plexcore HTL-OPV hole transport layer technology. Plexcore PV-F3, a photoactive-layer OPV ink, was certified in a single-layer OPV cell at the National Renewable Energy Laboratory (NREL) at 5.4%, which represents the highest official mark for a single-layer organic solar cell. We have fabricated and measured P3HT:PCBM solar cells with a peak efficiency of 4.4% and typical efficiencies of 3-4% (internal, NREL-calibrated measurement) with P3HT manufactured at Plextronics by the Grignard metathesis (GRIM) method. Outdoor and accelerated lifetime testing of these devices is reported. Both Plexcore PV-F3 and P3HT:PCBM-based OPV cells exhibit more than 750 hours of outdoor roof-top, non-accelerated lifetime, with less than 8% loss in initial efficiency, when exposed continuously to the climate of Western Pennsylvania; these devices remain under test. Accelerated testing using a high-intensity (1000 W) metal-halide lamp affords shorter lifetimes; however, the true acceleration factor is still to be determined.

  16. High-performance, highly bendable MoS2 transistors with high-k dielectrics for flexible low-power systems.

    PubMed

    Chang, Hsiao-Yu; Yang, Shixuan; Lee, Jongho; Tao, Li; Hwang, Wan-Sik; Jena, Debdeep; Lu, Nanshu; Akinwande, Deji

    2013-06-25

    While there have been increasing studies of MoS2 and other two-dimensional (2D) semiconducting dichalcogenides on hard conventional substrates, experimental or analytical studies on flexible substrates have been very limited so far, even though these 2D crystals are understood to have greater prospects for flexible smart systems. In this article, we report detailed studies of MoS2 transistors on industrial plastic sheets. Transistor characteristics afford more than 100x improvement in the ON/OFF current ratio and 4x enhancement in mobility compared to previous flexible MoS2 devices. Mechanical studies reveal robust electronic properties down to a bending radius of 1 mm, which is comparable to previous reports for flexible graphene transistors. Experimental investigation identifies crack formation in the dielectric as the responsible failure mechanism, demonstrating that the mechanical properties of the dielectric layer are critical for realizing flexible electronics that can accommodate high strain. Our uniaxial tensile tests have revealed that atomic-layer-deposited HfO2 and Al2O3 films have very similar crack-onset strain. However, crack propagation is slower in the HfO2 dielectric than in the Al2O3 dielectric, suggesting a subcritical fracture mechanism in the thin oxide films. Rigorous mechanics modeling provides guidance for achieving flexible MoS2 transistors that are reliable at sub-mm bending radius. PMID:23668386
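
    A rough estimate of the film strain behind sub-millimetre bending radii, using the common thin-film approximation strain ≈ t/(2R); the substrate thickness below is an assumed value, not taken from the paper:

```python
# Bending-strain estimate for a thin film on a plastic substrate of thickness
# t bent to radius R: strain ~ t / (2R). The 25 um thickness is assumed.
t_substrate = 25e-6                 # m, assumed plastic-sheet thickness
for R in [5e-3, 1e-3, 0.5e-3]:      # bending radii in metres
    strain = t_substrate / (2 * R)
    print(f"R = {R*1e3:4.1f} mm -> strain ~ {strain*100:.2f}%")
```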

  17. A micro trapping system coupled with a high performance liquid chromatography procedure for methylamine determination in both tissue and cigarette smoke.

    PubMed

    Zhang, Yongqian; Mao, Jian; Yu, Peter H; Xiao, Shengyuan

    2012-11-01

    Both endogenous and exogenous methylamine have been found to be involved in many human disorders. The quantitative assessment of methylamine has drawn considerable interest in recent years. Although there have been many papers about the determination of methylamine, only a few of them involved cigarette smoke or mammalian tissue analysis. The major hurdles in the determination of methylamine are the collection of methylamine from samples and the differentiation of methylamine from background compounds, e.g., biogenic amines. We have solved this problem using a micro trapping system coupled with an HPLC procedure. The interference from other biogenic amines has been avoided. The high selectivity of this method was achieved using four techniques: distillation, trapping, HPLC separation and selective detection. The chromatograms of both mouse tissues and cigarette smoke are simple, with only a few peaks. The method is easy and efficient, and it has been validated and applied to the determination of methylamine in tissues of normal CD-1 mice and in cigarette smoke. The methylamine contents were determined to be approximately 268.3 ng g−1 in the liver, 429.5 ng g−1 in the kidney and 547.4 ng g−1 in the brain, respectively. The methylamine in cigarette smoke was approximately 213-413 ng per cigarette. These results in tissues and in cigarette smoke were found to be consistent with the data in the previous literature. To the best of our knowledge, this is the first report of a method suitable for methylamine analysis in both mammalian tissue and cigarette smoke. PMID:23101659

  18. Durability and complications of photoselective vaporisation of the prostate with the 120W high performance system GreenLight™ lithium triborate laser

    PubMed Central

    Sahibzada, I; Elkabir, J; Feyisetan, O; Izegbu, V; Hellawell, G; Webster, J

    2014-01-01

    Introduction: The aim of this study was to examine the durability of photoselective vaporisation of the prostate (PVP) with the 120W GreenLight HPS® laser (American Medical Systems, Minnetonka, MN, US), and to examine the incidence, nature and factors associated with complications from the procedure. Methods: Clinical records of PVP patients were reviewed to compare details between patients who developed complications and those who did not. Kaplan–Meier survival curves were used to assess durability. Cox regression was used to examine associations between complications and perioperative factors. Results: Successful outcomes were maintained in 84% of 117 patients at the 2-year follow-up appointment. Complication rates were low and comparable with transurethral resection of the prostate (TURP). Eighteen patients (15.4%) developed complications over a mean follow-up duration of 20.8 months. The most common complications were residual prostate requiring further surgery (5/117, 4.3%) and urethral stricture (4/117, 3.4%). Patients with complications had significantly longer catheterisation durations. Length of hospital stay, lasing energy, pre- and postoperative levels of prostate specific antigen (PSA), pre- and postoperative maximum flow rate (Qmax), and age at surgery were not found to influence the development of complications. Conclusions: Results from PVP with an HPS® laser are durable. Complications are low and compare favourably with TURP. Lasing energy, PSA, Qmax, patient age and length of stay are not associated with the development of complications; however, longer postoperative catheterisation after PVP is associated with the development of complications. PMID:24992419
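
    A generic sketch of the survival-analysis workflow described above (a Kaplan–Meier durability estimate plus a Cox model on catheterisation duration), using the lifelines package on synthetic data; none of the numbers below are from the study:

```python
# Synthetic illustration of the analysis pattern (lifelines package);
# durations are months to complication, event=1 means a complication occurred.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":    [24, 20, 12, 30, 6, 18, 15, 28, 9, 22, 26, 10],  # synthetic
    "event":     [0,  0,  1,  0,  1, 0,  1,  0,  1, 0,  0,  1],   # synthetic
    "cath_days": [1,  3,  5,  2,  7, 4,  2,  1,  8, 5,  1,  6],   # synthetic
})

kmf = KaplanMeierFitter().fit(df["months"], event_observed=df["event"])
print(kmf.survival_function_.tail(1))  # complication-free fraction over follow-up

cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()  # does catheterisation duration predict complications?
```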

  19. A high performance thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Tijani, M. E. H.; Spoelstra, S.

    2011-11-01

    In thermoacoustic systems heat is converted into acoustic energy and vice versa. These systems use inert gases as the working medium and have no moving parts, which makes thermoacoustic technology a serious alternative for producing mechanical or electrical power, cooling power, and heating in a sustainable and environmentally friendly way. A thermoacoustic Stirling heat engine is designed and built which achieves a record performance of 49% of the Carnot efficiency. The design and performance of the engine are presented. The engine has no moving parts and is made up of a few simple components.
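
    The arithmetic behind "49% of the Carnot efficiency", with assumed hot- and cold-end temperatures (the abstract does not state them):

```python
# Absolute efficiency = 0.49 * Carnot efficiency. Temperatures are assumed.
T_hot, T_cold = 873.0, 300.0       # K (assumed: ~600 C hot end, ambient cold)
eta_carnot = 1.0 - T_cold / T_hot  # ~0.66
print(0.49 * eta_carnot)           # ~0.32 absolute heat-to-acoustic efficiency
```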

  20. High performance electrolytes for MCFC

    DOEpatents

    Kaun, Thomas D.; Roche, Michael F.

    1999-01-01

    A carbonate electrolyte of the Li/Na or CaBaLiNa system. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of Ca₂CO₃ and BaCO₃, preferably in equimolar amounts. The presence of both Ca and BaCO₃ enables lower-temperature fuel cell operation.

  1. High performance electrolytes for MCFC

    DOEpatents

    Kaun, T.D.; Roche, M.F.

    1999-08-24

    A carbonate electrolyte of the Li/Na or CaBaLiNa system is described. The Li/Na carbonate has a composition displaced from the eutectic composition to diminish segregation effects in a molten carbonate fuel cell. The CaBaLiNa system includes relatively small amounts of Ca₂CO₃ and BaCO₃, preferably in equimolar amounts. The presence of both Ca and BaCO₃ enables lower-temperature fuel cell operation. 15 figs.

  2. EDITORIAL: High performance under pressure

    NASA Astrophysics Data System (ADS)

    Demming, Anna

    2011-11-01

    The accumulation of charge in certain materials in response to an applied mechanical stress was first discovered in 1880 by Pierre Curie and his brother Paul-Jacques. The effect, piezoelectricity, forms the basis of today's microphones, quartz watches, and electronic components and constitutes an awesome scientific legacy. Research continues to develop further applications in a range of fields including imaging [1, 2], sensing [3] and, as reported in this issue of Nanotechnology, energy harvesting [4]. Piezoelectricity in biological tissue was first reported in 1941 [5]. More recently Majid Minary-Jolandan and Min-Feng Yu at the University of Illinois at Urbana-Champaign in the USA have studied the piezoelectric properties of collagen I [1]. Their observations support the nanoscale origin of piezoelectricity in bone and tendons and also imply the potential importance of the shear load transfer mechanism in mechanoelectric transduction in bone. Shear load transfer has been the principal basis of the nanoscale mechanics model of collagen. The piezoelectric effect in quartz causes a shift in the resonant frequency in response to a force gradient. This has been exploited for sensing forces in scanning probe microscopes that do not need optical readout. Recently researchers in Spain explored the dynamics of a double-pronged quartz tuning fork [2]. They observed thermal noise spectra in agreement with a coupled-oscillators model, providing important insights into the system's behaviour. Nano-electromechanical systems are increasingly exploiting piezoresistivity for motion detection. Observations of the change in a material's resistance in response to an applied stress pre-date the discovery of the piezoelectric effect and were first reported in 1856 by Lord Kelvin. Researchers at Caltech recently demonstrated that a bridge configuration of piezoresistive nanowires can be used to detect in-plane motion in a scheme that is CMOS-based and fully compatible with future very-large scale integration of

  3. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1 M phosphate buffer at pH 2.5 in a 50 µm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and, thus, brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.
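
    A sketch of how apparent electrophoretic mobility would be computed for a run like this one; the applied voltage and effective (inlet-to-detector) length are assumed values, not taken from the abstract:

```python
# Apparent mobility mu = (L_eff * L_total) / (V * t); voltage and effective
# length are assumptions, the capillary length and run time echo the text.
L_total = 0.35      # m, capillary length (from the abstract)
L_eff   = 0.28      # m, inlet-to-detector length (assumed)
V       = 20_000.0  # V, applied voltage (assumed)
t_mig   = 9 * 60.0  # s, migration time of the slowest component

mu_app = (L_eff * L_total) / (V * t_mig)
print(f"{mu_app:.2e} m^2/(V s)")  # ~9e-9 m^2/(V s)
```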

  4. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault-mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. High-Performance Wireless Telemetry

    NASA Technical Reports Server (NTRS)

    Griebeler, Elmer; Nawash, Nuha; Buckley, James

    2011-01-01

    Prior technology for machinery data acquisition used slip rings, FM radio communication, or non-real-time digital communication. Slip rings are often noisy, require space that may not be available, and require access to the shaft, which may not be possible. FM radio is not accurate or stable, is limited in the number of channels, often suffers channel crosstalk, and is intermittent as the shaft rotates. Non-real-time digital communication is very popular but complex, with long development times, and draws objections from users who need continuous waveforms from many channels. This innovation extends the amount of information conveyed from a rotating machine to a data acquisition system while keeping the development time short and keeping the rotating electronics simple, compact, stable, and rugged. The data are all real time. The product of the number of channels, the bit resolution, and the update rate gives a data rate higher than is available with older methods. The telemetry system consists of a data-receiving rack that supplies magnetically coupled power to a rotating instrument amplifier ring in the machine being monitored. The ring digitizes the data and magnetically couples the data back to the rack, where it is made available. The transformer is generally a ring positioned around the axis of rotation with one side of the transformer free to rotate and the other side held stationary. The windings are laid in the ring; this gives the data immunity to any rotation that may occur. A medium-frequency sine-wave power source in a rack supplies power through a cable to a rotating ring transformer that passes the power on to a rotating set of electronics. The electronics power a set of up to 40 sensors and provide instrument amplifiers for the sensors. The outputs from the amplifiers are filtered and multiplexed into a serial ADC. The output from the ADC is connected to another rotating ring transformer that conveys the serial data from the rotating section to
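
    The throughput rule of thumb stated above, worked through with assumed per-channel figures (only the 40-sensor count comes from the text):

```python
# Raw telemetry payload = channels * bits * update rate. Bit depth and rate
# are assumptions for illustration; the channel count echoes the text.
channels  = 40    # sensors, from the abstract
bits      = 16    # ADC resolution (assumed)
update_hz = 1000  # samples per second per channel (assumed)
print(channels * bits * update_hz / 1e6, "Mbit/s")  # 0.64 Mbit/s raw payload
```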

  6. High Performance Pulse Tube Cryocoolers

    NASA Astrophysics Data System (ADS)

    Olson, J. R.; Roth, E.; Champagne, P.; Evtimov, B.; Nast, T. C.

    2008-03-01

    Lockheed Martin's Advanced Technology Center has been developing pulse tube cryocoolers for more than ten years. Recent innovations include successful testing of four-stage coldheads, no-load temperatures below 4 K, and the development of a high-efficiency compressor. This paper discusses the predicted performance of single- and multiple-stage pulse tube coldheads driven by our new 6 kg "M5Midi" compressor, which is capable of 90% efficiency at 200 W input power and a maximum input power of 1000 W. This compressor retains the simplicity of earlier LM-ATC compressors: it has a moving magnet and an external electrical coil, minimizing organics in the working gas and requiring no electrical penetrations through the pressure wall. Motor losses were minimized during design, resulting in a simple, easily manufactured compressor with state-of-the-art motor efficiency. The predicted cryocooler performance is presented as simple formulae, allowing an engineer to include the impact of a highly optimized cryocooler in a full system analysis. Performance is given as a function of the heat-rejection temperature and the cold-tip temperatures and cooling loads.

  7. High Performance Field Reversed Configurations

    NASA Astrophysics Data System (ADS)

    Binderbauer, Michl

    2014-10-01

    The field-reversed configuration (FRC) is a prolate compact toroid with poloidal magnetic fields. FRCs could lead to economic fusion reactors with high power density, simple geometry, a natural divertor, ease of translation, and possibly the capability of burning aneutronic fuels. However, as in other high-beta plasmas, there are stability and confinement concerns. These concerns can be addressed by introducing and maintaining a significant fast ion population in the system. This is the approach adopted by TAE and implemented for the first time in the C-2 device. By studying the physics of FRCs driven by neutral beam (NB) injection, significant improvements were made in confinement and stability. Early C-2 discharges had relatively good confinement, but global power losses exceeded the available NB input power. The addition of axially streaming plasma guns and magnetic end plugs, as well as advanced surface conditioning, led to dramatic reductions in turbulence-driven losses and greatly improved stability. As a result, fast ion confinement significantly improved and allowed for build-up of a dominant fast particle population. Under such appropriate conditions we achieved highly reproducible, long-lived, macroscopically stable FRCs with record lifetimes. This demonstrated many beneficial effects of large-orbit particles and their performance impact on FRCs. Together, these achievements point to the prospect of beam-driven FRCs as a path toward fusion reactors. This presentation will review and expand on key results and present context for their interpretation.

  8. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.
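
    HPF expresses whole-array, data-parallel updates that the compiler maps onto distributed memory. As a language-neutral illustration (NumPy stands in for HPF here), the same kind of whole-array Jacobi sweep on a structured grid looks like this; in HPF the array would additionally carry distribution directives:

```python
# Whole-array Jacobi relaxation on a structured grid -- the data-parallel
# idiom that HPF expresses with array syntax plus DISTRIBUTE directives.
import numpy as np

u = np.zeros((256, 256))
u[0, :] = 1.0  # boundary condition (illustrative)

for _ in range(100):  # Jacobi iterations over the interior points
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
print(u.mean())
```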

  9. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon® refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  10. High performance computing and communications program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee

    1992-01-01

    A review of the High Performance Computing and Communications (HPCC) program is provided in vugraph format. The goals and objectives of this federal program are as follows: extend U.S. leadership in high performance computing and computer communications; disseminate the technologies to speed innovation and to serve national goals; and spur gains in industrial competitiveness by making high performance computing integral to design and production.

  11. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.
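
    A generic non-overlapping Allan-deviation estimator of the kind used to characterize frequency stability over varying sampling times; the input below is synthetic white frequency noise, for illustration only:

```python
# Non-overlapping Allan deviation for fractional-frequency data, with
# synthetic input (not Observatory data).
import numpy as np

def allan_deviation(y, m):
    """Allan deviation of fractional-frequency data y at averaging factor m."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # m-sample averages
    return np.sqrt(0.5 * np.mean(np.diff(ybar) ** 2))

rng = np.random.default_rng(0)
y = rng.normal(0, 1e-12, 100_000)  # synthetic white frequency noise
for m in [1, 10, 100, 1000]:
    print(m, allan_deviation(y, m))  # falls as ~1/sqrt(m) for white FM noise
```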

  12. Method of making a high performance ultracapacitor

    SciTech Connect

    Farahmandi, C.J.; Dispennette, J.M.

    2000-05-09

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
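
    A quick consistency check of the energy and power figures quoted above (our arithmetic, not the patent's):

```python
# At the rated power density, the rated energy store empties in 30 seconds.
energy_density = 5.0    # Wh/kg, from the abstract
power_density  = 600.0  # W/kg, from the abstract
print(energy_density / power_density * 3600, "s")  # 30 s full-power discharge
```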

  13. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  14. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  15. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now wide spread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  16. High Performance Work Practices and Firm Performance.

    ERIC Educational Resources Information Center

    Department of Labor, Washington, DC. Office of the American Workplace.

    A literature survey established that a substantial amount of research has been conducted on the relationship between productivity and the following specific high performance work practices: employee involvement in decision making, compensation linked to firm or worker performance, and training. According to these studies, high performance work…

  17. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  18. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    SciTech Connect

    Not Available

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  19. High performance thermal imaging for the 21st century

    NASA Astrophysics Data System (ADS)

    Clarke, David J.; Knowles, Peter

    2003-01-01

    In recent years IR detector technology has developed from early short linear arrays. Such devices require high-performance signal-processing electronics to meet today's thermal imaging requirements for military and para-military applications. This paper describes BAE SYSTEMS Avionics Group's Sensor Integrated Modular Architecture thermal imager, which has been developed alongside the group's Eagle 640×512 arrays to provide high-performance imaging capability. The electronics architecture also supports High Definition TV format 2D arrays for future growth capability.

  20. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    Designs of high-performance piezo composite actuators are developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface present in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties varied through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field throughout the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D-elasticity are compared with a model based on the Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input into the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer in the FGM system is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. CLT solutions are found to coincide well with the elasticity solutions for high aspect ratios while the CLT solutions gave poor results compared to the 2D elasticity solutions for

  1. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high performance computers have become the standard instruments for solving forward and inverse problems in seismology. Software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  2. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called, the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access to this data, through the NCI supercomputer; a private cloud that supports both domain focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as its future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  3. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  4. High Performance Computing in Solid Earth Sciences

    NASA Astrophysics Data System (ADS)

    Manea, V. C.; Manea, M.; Pomeran, M.; Besutiu, L.; Zlagnean, L.

    2012-04-01

    The solid earth sciences have recently begun moving towards implementing high performance computing (HPC) research facilities. One of the key tenets of HPC is performance, and designing an HPC solution tailored to a specific research field such as the solid earth sciences, at an optimum price/performance ratio, is often a challenge. HPC system performance depends strongly on the software-hardware interaction, and therefore prior knowledge of how well specific parallelized software performs on different HPC architectures can weigh significantly on the final configuration. In this paper we present benchmark results from two different HPC systems: a low-end HPCC (Horus) with 300 cores and 1.6 TFlops theoretical peak performance, and a high-end HPCC (CyberDyn) with 1344 cores and 11.2 TFlops theoretical peak performance. The software benchmark used in this paper is the open-source package CitcomS, which is widely used in the solid earth community (www.geodynamics.org). Testing a CFD code specific to the earth sciences, the Gigabit Ethernet-based HPC system Horus performed remarkably well compared with its counterpart CyberDyn, which is based on InfiniBand QDR fabric, but only for a relatively small number of computing cores (96). As the mesh size and the number of computing cores increase, the HPCC CyberDyn starts outperforming the HPCC Horus because of its low-latency high-speed QDR network dedicated to MPI traffic. Since we are presently moving towards high-resolution simulations for geodynamic predictions that require the same scale as observations, HPC facilities used in the earth sciences should benefit from larger up-front investment in future systems based on high-speed interconnects.
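
    The crossover the benchmark reports is a latency effect, and it is easy to measure directly. Below is a hedged ping-pong microbenchmark (not from the paper) that estimates the one-way message latency an MPI code such as CitcomS pays on a given interconnect; mpi4py is assumed to be available.

```python
from mpi4py import MPI
import numpy as np

# Ping-pong microbenchmark: the round-trip time between ranks 0 and 1
# approximates the interconnect latency that dominates scaling at high core
# counts. Run with: mpiexec -n 2 python pingpong.py
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype='b')   # 1-byte message isolates latency from bandwidth

reps = 1000
comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1); comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0); comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    print(f"one-way latency ~ {(t1 - t0) / (2 * reps) * 1e6:.1f} us")
```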

  5. Micromagnetics on high-performance workstation and mobile computational platforms

    NASA Astrophysics Data System (ADS)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or built into low-power computing clusters.

  6. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, and various parallel computers such as symmetric multiprocessors, workstation clusters, and massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, and optimization for RISC processors; parallel programming techniques such as shared-memory parallelism, message passing, and data parallelism; and numerical libraries.
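
    Of the programming techniques the overview lists, data parallelism is the most compact to show: the same operation is applied independently to partitions of the data. The sketch below illustrates the idea with Python worker processes; the per-element function is an arbitrary placeholder.

```python
from multiprocessing import Pool
import math

def work(x):
    # Placeholder per-element computation; in HPC practice this would be a
    # numerically heavy kernel applied to each partition of the data.
    return math.sqrt(x) * math.sin(x)

if __name__ == "__main__":
    data = range(1_000_000)
    with Pool(processes=4) as pool:                  # one worker per core
        results = pool.map(work, data, chunksize=10_000)
    print(f"checksum: {sum(results):.3f}")
```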

  7. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, comprises many processes involving complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce added costs, during and after construction, that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints and complex physical processes requires analysis methods that are more accurate than traditional approaches. This study focuses specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations, and these governing equations have few, if any, closed-form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods instead solve the governing equations directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made, which increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have made computer simulation an essential tool for problem solving. To perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use, and only a few vendors build high performance computers that satisfy engineering needs. Software must be optimized for quick and accurate execution, and operating systems must utilize the hardware efficiently while giving the software seamless access to the computer’s resources. From the perspective of Bechtel Corporation and the Idaho
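
    The central idea, discretizing a governing equation on a grid and marching it forward rather than assuming a closed-form solution, can be shown in a few lines. The sketch below is illustrative only and unrelated to the WTP analyses: it solves the 1D heat equation with an explicit finite-difference scheme.

```python
import numpy as np

# Explicit finite-difference solution of the 1D heat equation u_t = alpha*u_xx.
alpha, nx, nt = 1.0, 101, 500
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # below the explicit stability limit of 0.5

u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial heat pulse in the middle

for _ in range(nt):
    # Second-difference stencil applied to all interior grid points at once.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0            # fixed boundary temperatures

print("peak temperature after diffusion:", u.max())
```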

  8. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to those phases to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system that combines both collective and point-to-point communications into phases and applies throttling in addition to DVFS to maximize energy savings. Experimental results are presented for the NAS parallel benchmarks as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
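
    On Linux, the software frequency control the abstract refers to is exposed through the cpufreq interface, so a phase-based policy can be sketched in user space. The fragment below is a hedged illustration, not the thesis's runtime system: it assumes the 'userspace' cpufreq governor, root privileges, and made-up frequency values.

```python
import os

# Path to the per-CPU frequency knob under the Linux 'userspace' governor.
CPUFREQ = "/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_setspeed"

def set_freq_khz(freq_khz, ncpus=os.cpu_count()):
    for cpu in range(ncpus):
        with open(CPUFREQ.format(cpu=cpu), "w") as f:   # requires root
            f.write(str(freq_khz))

def run_comm_phase(comm_fn, low_khz=1_200_000, high_khz=2_400_000):
    # Communication phases are stall-bound, so cycles are cheap to give up:
    # scale down, run the phase, then restore full speed for compute.
    set_freq_khz(low_khz)
    try:
        comm_fn()            # e.g. an all-to-all or a group of point-to-points
    finally:
        set_freq_khz(high_khz)
```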

  9. High performance hand-held gas chromatograph

    SciTech Connect

    Yu, C.M.

    1998-04-28

    The Microtechnology Center of Lawrence Livermore National Laboratory has developed a high performance, hand-held, real-time detection gas chromatograph (HHGC) using Micro-Electro-Mechanical-System (MEMS) technology. The total weight of this hand-held gas chromatograph is about five lbs., with a physical size of 8" x 5" x 3" including carrier gas and battery. It consumes about 12 watts of electrical power, with a response time on the order of one to two minutes. This HHGC averages about 40,000 effective theoretical plates. Presently, its sensitivity is limited to the ppm level by its thermally sensitive detector. Like a conventional GC, this HHGC consists mainly of three major components: (1) the sample injector, (2) the column, and (3) the detector with related electronics. The present HHGC injector is a modified version of the conventional injector. Its separation column is fabricated completely on silicon wafers by means of MEMS technology and has a circular cross section with a diameter of 100 µm. The detector developed for this hand-held GC is a thermal conductivity detector fabricated on a silicon nitride window by MEMS technology. A normal Wheatstone bridge is used. The signal is fed into a PC and displayed through LabView software.

  10. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  11. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable, and highly available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead, and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a 12 MB binary on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks, and gang scheduling.
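
    One of STORM's innovations, distributing a job's binary with a scalable collective rather than point-to-point copies, can be mimicked in miniature. The sketch below uses an MPI broadcast only as a stand-in for STORM's NIC-resident hardware collectives; the file name is hypothetical.

```python
from mpi4py import MPI

# Broadcast a job binary to all nodes with one collective call, so the
# distribution time grows roughly logarithmically rather than linearly in
# machine size. Illustrative only; STORM itself uses hardware collectives.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    with open("job_binary", "rb") as f:   # hypothetical 12 MB executable
        blob = f.read()
else:
    blob = None

blob = comm.bcast(blob, root=0)           # one collective, log-depth fan-out

with open(f"/tmp/job_binary.{rank}", "wb") as f:
    f.write(blob)
```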

  12. Multichannel Detection in High-Performance Liquid Chromatography.

    ERIC Educational Resources Information Center

    Miller, James C.; And Others

    1982-01-01

    A linear photodiode array is used as the photodetector element in a new ultraviolet-visible detection system for high-performance liquid chromatography (HPLC). Using a computer network, the system processes eight different chromatographic signals simultaneously in real-time and acquires spectra manually/automatically. Applications in fast HPLC…

  13. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity, and amenity for occupants; reduces operating costs for building owners; and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control, while improving thermal comfort and providing most of the light needed through daylighting; enhanced air quality and reduced cooling loads, using natural ventilation schemes that employ the facade as an active air control element; reduced operating costs, by minimizing lighting, cooling, and heating energy use and optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building, using integrated photovoltaic systems; and improved indoor environments, leading to enhanced occupant health, comfort, and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  14. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial for the orbit insertion of small, power-limited satellites because of their high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with an Isp goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.
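
    The closing claim is a one-line arithmetic check: if the 93-percent combustion efficiency is treated, as a simplification, as a direct multiplier on specific impulse, the baseline formulation lands on the goal.

```latex
I_{sp,\mathrm{delivered}} \;\approx\; \eta_c \, I_{sp,\mathrm{theoretical}}
  \;=\; 0.93 \times 269~\mathrm{sec} \;\approx\; 250~\mathrm{sec}
```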

  15. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report on our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high-precision optics (2-inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational and awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and the DM is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations, including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduates Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  16. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the product markets one serves; and the most likely drivers and direction of future market forces. The author discusses all three points, then describes how to measure the company's performance. Becoming a true high performance refiner often involves redesigning the organization as well as the business processes, and the author discusses such redesign. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  17. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goals of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) are: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star, centered on the optical axis in the image plane, is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression at each step is not complete, a series of steps is required to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. To utilize this performance gain, fabrication methods have to be developed to manufacture these occulters to the specifications set by the sensitivity requirements of the coronagraph.

  18. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

    Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of these hospitals. The basic structure of the PACS and the teleradiology being used is outlined. The way the PACS are being used in the hospitals is instructive. The three heaviest users of radiology in the hospital are the wards (including the ICU wards), the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians: the same radiology staff handles a 30 to 50 percent greater workload, and clinicians save 10 to 20 percent of the time they spend dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital in less than two seconds, even during the busiest hour of the day. The examination management functions restrict the attention of any one user to the examinations that are of interest, and they organize the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. An examination can be acquired, read, and the report dictated in much less than ten minutes, but transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  19. Separation, concentration and determination of chloramphenicol in environment and food using an ionic liquid/salt aqueous two-phase flotation system coupled with high-performance liquid chromatography.

    PubMed

    Han, Juan; Wang, Yun; Yu, Cuilan; Li, Chunxiang; Yan, Yongsheng; Liu, Yan; Wang, Liang

    2011-01-31

    Ionic liquid-salt aqueous two-phase flotation (ILATPF) is a novel, green, non-toxic, and sensitive sample pretreatment technique. ILATPF coupled with high-performance liquid chromatography (HPLC) was developed for the analysis of chloramphenicol; it combines an ionic liquid aqueous two-phase system (ILATPS), based on an imidazolium ionic liquid (1-butyl-3-methylimidazolium chloride, [C(4)mim]Cl) and an inorganic salt (K(2)HPO(4)), with solvent sublation. Phase behaviors of the ILATPF were studied for different types of ionic liquids and salts. The sublation efficiency of chloramphenicol in the [C(4)mim]Cl-K(2)HPO(4) ILATPF was influenced by the type of salt, the concentration of K(2)HPO(4) in aqueous solution, the solution pH, the nitrogen flow rate, the sublation time, and the amount of [C(4)mim]Cl. Under the optimum conditions, the average sublation efficiency is up to 98.5%. The mechanism of ILATPF comprises two principal processes: IL-salt ILATPS formation and solvent sublation. The method proved practical when applied to the analysis of chloramphenicol in lake water, feed water, milk, and honey samples, with a linear range of 0.5-500 ng mL(-1), a limit of detection (LOD) of 0.1 ng mL(-1), and a limit of quantification (LOQ) of 0.3 ng mL(-1). The recovery of CAP from aqueous environmental and food samples was 97.1-101.9%. Compared with liquid-liquid extraction, solvent sublation, and ionic liquid aqueous two-phase extraction, ILATPF can not only separate and concentrate chloramphenicol with high sublation efficiency but also efficiently reduce the wastage of IL. This novel technique is much simpler and more environmentally friendly, and is suggested to have important applications for the concentration and separation of other small biomolecules. PMID:21168562

  20. Resource Estimation in High Performance Medical Image Computing

    PubMed Central

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D.M.

    2015-01-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of ‘jobs’ requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources. PMID:24906466
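
    A resource estimator of the kind motivated here can be as simple as a regression over past executions. The sketch below is an illustrative assumption, not the paper's system: it fits observed runtimes against input size and pads the prediction with a safety margin before job submission.

```python
import numpy as np

# Hypothetical execution history: input size (MB) vs. observed runtime (min).
history_mb   = np.array([120,  250,  480,  900, 1800])
history_mins = np.array([ 14,   27,   55,  101,  205])

# Fit runtime ~ a * size + b; a linear model is a deliberate simplification.
coeffs = np.polyfit(history_mb, history_mins, deg=1)

def estimate_walltime(input_mb, safety=1.25):
    """Predicted runtime plus a safety margin, to avoid killed jobs while
    keeping the request small enough not to inflate queue wait times."""
    return float(np.polyval(coeffs, input_mb)) * safety

print(f"request ~{estimate_walltime(1200):.0f} minutes of walltime")
```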

  1. High Performance Computing and Communications Panel Report.

    ERIC Educational Resources Information Center

    President's Council of Advisors on Science and Technology, Washington, DC.

    This report offers advice on the strengths and weaknesses of the High Performance Computing and Communications (HPCC) initiative, one of five presidential initiatives launched in 1992 and coordinated by the Federal Coordinating Council for Science, Engineering, and Technology. The HPCC program has the following objectives: (1) to extend U.S.…

  2. Co-design for high performance computing.

    SciTech Connect

    Dosanjh, Sudip Singh; Hemmert, Karl Scott; Rodrigues, Arun F.

    2010-07-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  3. High Poverty, High Performing Schools. IDRA Focus.

    ERIC Educational Resources Information Center

    IDRA Newsletter, 1997

    1997-01-01

    This theme issue includes four articles on high performance by poor Texas schools. In "Principal of National Blue Ribbon School Says High Poverty Schools Can Excel" (interview with Robert Zarate by Christie L. Goodman), the principal of Mary Hull Elementary School (San Antonio, Texas) describes how the high-poverty, high-minority school…

  4. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  5. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
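
    The patented method reduces to a grouping operation: threads whose gathered call-stack addresses are identical collapse into one group, so rare groups (candidate defective threads) stand out even among thousands of threads. A minimal sketch, with made-up addresses standing in for gathered stacks:

```python
from collections import defaultdict

# Hypothetical gathered data: thread id -> tuple of calling-instruction
# addresses. Three threads share a stack; thread 3 is stuck elsewhere.
thread_stacks = {
    0: (0x4008A0, 0x400B10, 0x400F44),
    1: (0x4008A0, 0x400B10, 0x400F44),
    2: (0x4008A0, 0x400B10, 0x400F44),
    3: (0x4008A0, 0x400C2C),            # outlier: candidate defective thread
}

# Group threads by identical address lists.
groups = defaultdict(list)
for tid, stack in thread_stacks.items():
    groups[stack].append(tid)

# Display smallest groups first, since outliers are the interesting ones.
for stack, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
    print(f"{len(tids):4d} thread(s) at {[hex(a) for a in stack]}: {tids}")
```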

  6. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  7. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  8. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  9. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  10. 24 CFR 902.71 - Incentives for high performers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Excerpt from 24 CFR 902.71, "Incentives for high performers" (Public Housing Assessment System, PHAS Incentives and Remedies): incentives for high-performing public housing agencies, subject to requirements that remain in effect, such as those for competitive bidding or competitive negotiation (see 24 CFR 85...).

  11. National Best Practices Manual for Building High Performance Schools

    ERIC Educational Resources Information Center

    US Department of Energy, 2007

    2007-01-01

    The U.S. Department of Energy's Rebuild America EnergySmart Schools program provides school boards, administrators, and design staff with guidance to help make informed decisions about energy and environmental issues important to school systems and communities. "The National Best Practices Manual for Building High Performance Schools" is a part of…

  12. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
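
    The contrast the abstract draws can be made concrete: gathering one value from every rank takes size-1 point-to-point messages per rank, versus a single collective call that the library can implement with an optimized algorithm. A hedged mpi4py sketch (run with mpiexec -n 4):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
value = rank * rank   # each rank's local contribution

# Point-to-point version: size-1 nonblocking sends plus size-1 receives.
gathered = [None] * size
gathered[rank] = value
reqs = [comm.isend(value, dest=o) for o in range(size) if o != rank]
for other in range(size):
    if other != rank:
        gathered[other] = comm.recv(source=other)
for r in reqs:
    r.wait()

# Collective version: one call, with synchronization and algorithm choice
# (tree, ring, ...) left to the MPI library.
assert gathered == comm.allgather(value)
```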

  13. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Diagnostic and remedial methods concerning rotordynamic instability problems in high performance turbomachinery are discussed. Instabilities due to seal forces and work-fluid forces are identified, along with those induced by rotor-bearing systems. Several methods of rotordynamic control are described, including active feedback methods, the use of elastomeric elements, and the use of hydrodynamic journal bearings and supports.

  14. The Process Guidelines for High-Performance Buildings

    SciTech Connect

    Grondzik, W.

    1999-07-01

    The Process Guidelines for High-Performance Buildings are a set of recommendations for the design and operation of efficient and effective commercial/institutional buildings. The Process Guidelines have been developed in a searchable database format and are intended to replace print documents that provide guidance for new building designs for the State of Florida and for the operation of existing State buildings. The Process Guidelines for High-Performance Buildings reside on the World Wide Web and are publicly accessible. Contents may be accessed in a variety of ways to best suit the needs of the user. The Process Guidelines address the interests of a range of facilities professionals; are organized around the primary phases of building design, construction, and operation; and include content dealing with all major building systems. The Process Guidelines for High-Performance Buildings may be accessed through the "Resources" area of the edesign Web site: http://fcn.state.fl.us/fdi/edesign/resource/index.html.

  15. Extraction and determination of chloramphenicol in feed water, milk, and honey samples using an ionic liquid/sodium citrate aqueous two-phase system coupled with high-performance liquid chromatography.

    PubMed

    Han, Juan; Wang, Yun; Yu, Cui-lan; Yan, Yong-sheng; Xie, Xue-qiao

    2011-01-01

    A green, simple, non-toxic, and sensitive sample pretreatment procedure coupled with high-performance liquid chromatography (HPLC) was developed for the analysis of chloramphenicol (CAP). It exploits an aqueous two-phase system based on an imidazolium ionic liquid (1-butyl-3-methylimidazolium tetrafluoroborate, [Bmim]BF(4)) and an organic salt (Na(3)C(6)H(5)O(7)) using a liquid-liquid extraction technique. The factors influencing the partition behavior of CAP were studied, including the type and amount of salt, the pH value, the volume of [Bmim]BF(4), and the extraction temperature. The extraction efficiency of CAP was found to increase with increasing temperature and volume of [Bmim]BF(4). Thermodynamic studies indicated that hydrophobic interactions were the main driving force, although electrostatic interactions and salting-out effects were also important for the transfer of the CAP. Under the optimal conditions, 90.1% of the CAP could be extracted into the ionic liquid-rich phase in a single-step extraction. The method proved practical when applied to the analysis of CAP in feed water, milk, and honey samples, with a linear range of 2-1,000 ng mL(-1), a limit of detection of 0.3 ng mL(-1), and a limit of quantification of 1.0 ng mL(-1). The recovery of CAP from real feed water, milk, and honey samples was 90.4-102.7%. This novel process is much simpler and more environmentally friendly, and is suggested to have important applications for the separation of antibiotics. PMID:21063686

  16. Single-step electrotransfer of reverse-stained proteins from sodium dodecyl sulfate-polyacrylamide gel onto reversed-phase minicartridge and subsequent desalting and elution with a conventional high-performance liquid chromatography gradient system for analysis.

    PubMed

    Fernandez-Patron, C; Madrazo, J; Hardy, E; Mendez, E; Frank, R; Castellanos-Serra, L

    1995-06-01

    Isolation of proteins from polyacrylamide electrophoresis gels by a novel combination of techniques is described. A given protein band from a reverse-stained (imidazole-sodium dodecyl sulfate-zinc salts) gel can be directly electrotransferred onto a reversed-phase chromatographic support packed in a self-made minicartridge (2 mm in thickness, 8 mm in internal diameter, made of inert polymeric materials). The minicartridge is then connected to a high-performance liquid chromatography system, and the electrotransferred protein is eluted by applying an acetonitrile gradient. Proteins elute in a small volume (<700 microL) of high-purity volatile solvents (water, trifluoroacetic acid, acetonitrile) and are free of contaminants (gel contaminants, salts, etc.). Electrotransferred proteins were efficiently retained by the octadecyl matrix, e.g., up to 90% for radioiodinated alpha-lactalbumin, and their recovery on elution from the minicartridge was in the range typical for this type of chromatographic support, e.g., 73% for alpha-lactalbumin. The technique was successfully applied to a variety of proteins in the molecular mass range 6-68 kDa and in amounts between 50 and 2000 pmol. The good mechanical and chemical stability of the minicartridges during electrotransfer and chromatography allowed their repeated use. This new technique permitted single-step separation of two proteins unresolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis, owing to their different elution from the reversed-phase support. The isolated proteins were amenable to analysis by N-terminal sequencing, enzymic digestion, and mass spectrometry of their proteolytic fragments. Chromatographic elution of proteins from the reversed-phase minicartridge was apparently independent of the loading mode employed, i.e., conventional loop injection or electrotransfer. PMID:7498136

  17. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idling to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low-pressure, air-atomized burner derived from burner designs commonly used in gas turbine combustors. This paper reviews the design and test results of the BNL diesel-fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to the U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air-blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, which is easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages, and much lower fuel pressure requirements. At very low firing rates, the small passage sizes in pressure-swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low-input oil burners, with the objective of developing burners that achieve the benefits of air atomization at air pressures practically achievable with a simple burner fan.

  18. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products: an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes: Volume 1 details the heat pump dryer development, while Volume 2 details the modulating gas dryer development. In both efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the product development process for both dryer designs. Heat pump clothes dryers have existed for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market, where no volume-production heat pump dryers are available. The issues have typically been in two key areas: cost and performance. Cost is a given, in that a heat pump clothes dryer has numerous additional components. While heat pump dryers have achieved significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads, with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads, with up to 50% energy savings and 30-40% time savings; (3) improved fabric temperature uniformity; and (4) robust performance across a range of vent restrictions. For the gas dryer development, the concept was to modulate the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  19. Poisson's ratio of high-performance concrete

    SciTech Connect

    Persson, B.

    1999-10-01

    This article outlines an experimental and numerical study of Poisson's ratio of high-performance concrete subjected to air or sealed curing. Eight concrete qualities (about 100 cylinders and 900 cubes) were studied, both young and in the mature state. The concretes contained between 5 and 10% silica fume, and two additionally contained air-entrainment. Parallel studies of strength and internal relative humidity were carried out. The results indicate that Poisson's ratio of high-performance concrete is slightly smaller than that of normal-strength concrete. Analyses of the influence of maturity, type of aggregate, and moisture on Poisson's ratio are also presented. The project was carried out from 1991 to 1998.
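
    For reference, the measured quantity is the standard ratio of lateral contraction to axial extension under uniaxial load:

```latex
\nu = -\,\frac{\varepsilon_{\mathrm{lateral}}}{\varepsilon_{\mathrm{axial}}}
```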

  20. Scientific data storage solutions: Meeting the high-performance challenge

    SciTech Connect

    Krantz, D.; Jones, L.; Kluegel, L.; Ramsey, C.; Collins, W.

    1994-04-01

    The Los Alamos High-Performance Data System (HPDS) has been developed to meet data storage and data access requirements of Grand Challenge and National Security problems running in a high-performance computing environment. HPDS is a fourth-generation data storage system in which storage devices are directly connected to a network, data is transferred directly between client machines and storage devices, and software distributed on workstations provides system management and control capabilities. Essential to the success of HPDS is the ability to effectively use HIPPI networks and HIPPI-attached storage devices for high-speed data transfer. This paper focuses on the performance of the HPDS storage systems in a Cray Supercomputer environment.

  1. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High-temperature structural and breeding materials are needed for high thermal performance, and a suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low-afterheat, low-chemical-reactivity, and low-activation materials are desired, to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of a high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper: design configurations, performance characteristics, unique advantages, and issues. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with advances in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical need is for 14 MeV neutron irradiation facilities to generate the necessary engineering design data and predict FW/blanket component lifetime and availability.

  2. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications but are often too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address high hardware costs, one may build less expensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications; examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) to HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  3. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  4. Implementing High Performance Remote Method Invocation in CCA

    SciTech Connect

    Yin, Jian; Agarwal, Khushbu; Krishnan, Manoj Kumar; Chavarría-Miranda, Daniel; Gorton, Ian; Epperly, Thomas G.

    2011-09-30

    We report our effort in engineering a high performance remote method invocation (RMI) mechanism for the Common Component Architecture (CCA). This mechanism provides a highly efficient and easy-to-use means of distributed computing in CCA, enabling CCA applications to effectively leverage parallel systems to accelerate computations. This work builds on previous work on Babel RMI. Babel is a high performance language interoperability tool used in CCA so that scientific application writers can share, reuse, and compose applications from software components written in different programming languages. Babel provides a transparent and flexible RMI framework for distributed computing. However, the existing Babel RMI implementation is built on top of TCP and does not provide the level of performance required to distribute fine-grained tasks. We observed that the main reason the TCP-based RMI does not perform well is that it does not efficiently utilize the high performance interconnect hardware on a cluster. We have implemented a high performance RMI protocol, HPCRMI. HPCRMI achieves low latency by building on top of a low-level portable communication library, the Aggregate Remote Memory Copy Interface (ARMCI), and by minimizing communication for each RMI call. Our design allows an RMI operation to be completed by only two RDMA operations. We also aggressively optimize our system to reduce copying. In this paper, we discuss the design and our experimental evaluation of this protocol. Our experimental results show that our protocol can improve RMI performance by an order of magnitude.
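
    The essence of the protocol, an RMI call completed by two one-sided operations, can be sketched with MPI one-sided windows standing in for ARMCI. This is a conceptual illustration only: the buffer layout, the polling, and the server side are elided, and none of the names below come from HPCRMI.

```python
from mpi4py import MPI
import numpy as np

# Conceptual sketch: one one-sided write deposits the request in the server's
# window, one one-sided read fetches the result. No TCP round trips.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
BUF_LEN = 64

win = MPI.Win.Allocate(BUF_LEN, comm=comm)   # per-rank RMI buffer

if rank == 0:                                 # the "client"
    request = np.frombuffer(b"add(2,3)".ljust(BUF_LEN), dtype='b').copy()
    win.Lock(1)
    win.Put(request, target_rank=1)           # one-sided op 1: write request
    win.Unlock(1)

    # ... synchronization with the server's processing is elided here ...

    reply = np.empty(BUF_LEN, dtype='b')
    win.Lock(1)
    win.Get(reply, target_rank=1)             # one-sided op 2: read result
    win.Unlock(1)

win.Free()
```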

  5. Department of Energy research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  6. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  7. Micro-polarimeter for high performance liquid chromatography

    DOEpatents

    Yeung, Edward E.; Steenhoek, Larry E.; Woodruff, Steven D.; Kuo, Jeng-Chung

    1985-01-01

    A micro-polarimeter interfaced with a system for high performance liquid chromatography, for quantitatively analyzing micro and trace amounts of optically active organic molecules, particularly carbohydrates. A flow cell with a narrow bore is connected to a high performance liquid chromatography system. Thin, low-birefringence cell windows cover opposite ends of the bore. A focused, polarized laser beam is directed along the longitudinal axis of the bore as an eluent containing the organic molecules is pumped through the cell. The beam is modulated by air-gap Faraday rotators for phase-sensitive detection to enhance the signal-to-noise ratio. An analyzer records the beam's direction of polarization after it passes through the cell. Calibration of the liquid chromatography system allows the quantity of organic molecules present to be determined from the degree to which the polarized beam is rotated when it passes through the eluent.
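
    The quantitation rests on Biot's law: the measured rotation is proportional to the path length and the concentration of the optically active analyte, so once the specific rotation is known the concentration follows from the measured angle. (This is the standard polarimetric relation, not a formula quoted from the patent.)

```latex
\alpha = [\alpha]_{\lambda}^{T}\, l\, c
\qquad\Longrightarrow\qquad
c = \frac{\alpha}{[\alpha]_{\lambda}^{T}\, l}
```

    Here α is the measured rotation, [α] the specific rotation of the analyte at wavelength λ and temperature T, l the path length, and c the concentration.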

  8. Failure analysis of high performance ballistic fibers

    NASA Astrophysics Data System (ADS)

    Spatola, Jennifer S.

    High performance fibers have a high tensile strength and modulus, good wear resistance, and a low density, making them ideal for applications in ballistic impact resistance, such as body armor. However, the observed ballistic performance of these fibers is much lower than the predicted values. Since the predictions assume only tensile stress failure, it is safe to assume that the stress state is affecting fiber performance. The purpose of this research was to determine if there are failure mode changes in the fiber fracture when transversely loaded by indenters of different shapes. An experimental design mimicking transverse impact was used to determine any such effects. Three different indenters were used: round, FSP, and razor blade. The indenter height was changed to vary the angle of failure tested. Five high performance fibers were examined: Kevlar® KM2, Spectra® 130d, Dyneema® SK-62 and SK-76, and Zylon® 555. Failed fibers were analyzed using an SEM to determine failure mechanisms. The results show that the round and razor blade indenters produced a constant failure strain, as well as failure mechanisms independent of testing angle. The FSP indenter produced a decrease in failure strain as the angle increased. Fibrillation was the dominant failure mechanism at all angles for the round indenter, while through-thickness shearing was the failure mechanism for the razor blade. The FSP indenter showed a transition from fibrillation at low angles to through-thickness shearing at high angles, indicating that the round and razor blade indenters are extreme cases of the FSP indenter. The failure mechanisms observed with the FSP indenter at various angles correlated with the experimental strain data obtained during fiber testing. This indicates that the geometry of the indenter tip in compression is a contributing factor in lowering the failure strain of the high performance fibers. TEM analysis of the fiber failure mechanisms was also attempted, though without

  9. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  10. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  11. High performance forward swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Koenig, David G. (Inventor); Aoyagi, Kiyoshi (Inventor); Dudley, Michael R. (Inventor); Schmidt, Susan B. (Inventor)

    1988-01-01

    A high performance aircraft capable of subsonic, transonic and supersonic speeds employs a forward swept wing planform and at least one first and second solution ejector located on the inboard section of the wing. A high degree of flow control on the inboard sections of the wing is achieved along with improved maneuverability and control of pitch, roll and yaw. Lift loss is delayed to higher angles of attack than in conventional aircraft. In one embodiment the ejectors may be advantageously positioned spanwise on the wing while the ductwork is kept to a minimum.

  12. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine that iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine is biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Can be reused several times before refilling with fresh iodine crystals becomes necessary.

  13. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node "brain" with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  14. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter (six microns) was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Performance of this fiber as a reinforcement of composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of applications in space structures, the industrial field, sporting goods, and civil infrastructure.

  15. Portability Support for High Performance Computing

    NASA Technical Reports Server (NTRS)

    Cheng, Doreen Y.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    While a large number of tools have been developed to support application portability, high performance application developers often prefer to use vendor-provided, non-portable programming interfaces. This phenomenon indicates the mismatch between user priorities and tool capabilities. This paper summarizes the results of a user survey and a developer survey. The user survey revealed the user priorities and resulted in three criteria for evaluating tool support for portability. The developer survey resulted in the evaluation of portability support and indicated the possibilities and difficulties of improvements.

  16. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    High performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The material has a volatile content of 0.18%, excellent flexibility, good adherence properties, and fuel resistance. No corrosion of aluminum or titanium was observed.

  17. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to designing scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures as the cost for data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.
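
    The point about data movement dominating cost can be made concrete with a toy tiling pattern: process a large on-disk 3-D grid slab by slab, so that only the active tile is ever resident in memory. This is a minimal illustrative sketch; the file name, grid sizes, and the reduction are assumptions, not part of the cited architectures.

        import numpy as np

        NX, NY, NZ = 256, 256, 64                # hypothetical grid dimensions
        PATH = "coastal_grid.f32"                # hypothetical dataset file

        # Create a zero-filled demo dataset once so the sketch is self-contained.
        np.memmap(PATH, dtype=np.float32, mode="w+", shape=(NX, NY, NZ)).flush()

        grid = np.memmap(PATH, dtype=np.float32, mode="r", shape=(NX, NY, NZ))

        slab_max = []
        for x0 in range(0, NX, 64):              # stream 64-plane slabs through memory
            slab = grid[x0:x0 + 64]              # only this slab is paged in from disk
            slab_max.append(float(slab.max()))   # per-slab reduction (e.g. peak value)

        print("global max:", max(slab_max))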

  18. Multijunction Photovoltaic Technologies for High-Performance Concentrators

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-01-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.
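
    For a rough sense of what such cells deliver under concentration, the back-of-envelope below combines the 41% cell efficiency quoted above with assumed operating conditions; the concentration ratio, irradiance, optics efficiency, and cell area are illustrative, not from the paper.

        cell_efficiency = 0.41      # 41% multijunction cell, per the abstract
        optical_eff     = 0.85      # assumed concentrator optics efficiency
        concentration   = 500.0     # assumed 500x geometric concentration
        dni             = 850.0     # assumed direct normal irradiance, W/m^2
        cell_area_m2    = 1.0e-4    # assumed 1 cm^2 cell

        p_out = cell_efficiency * optical_eff * concentration * dni * cell_area_m2
        print(f"{p_out:.1f} W per cell")   # ~14.8 W from a single 1 cm^2 cell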

  19. Multijunction Photovoltaic Technologies for High-Performance Concentrators: Preprint

    SciTech Connect

    McConnell, R.; Symko-Davies, M.

    2006-05-01

    Multijunction solar cells provide high-performance technology pathways leading to potentially low-cost electricity generated from concentrated sunlight. The National Center for Photovoltaics at the National Renewable Energy Laboratory has funded different III-V multijunction solar cell technologies and various solar concentration approaches. Within this group of projects, III-V solar cell efficiencies of 41% are close at hand and will likely be reported in these conference proceedings. Companies with well-developed solar concentrator structures foresee installed system costs of $3/watt--half of today's costs--within the next 2 to 5 years as these high-efficiency photovoltaic technologies are incorporated into their concentrator photovoltaic systems. These technology improvements are timely as new large-scale multi-megawatt markets, appropriate for high performance PV concentrators, open around the world.

  20. Design concepts to improve high performance solar simulator

    NASA Technical Reports Server (NTRS)

    Juranek, H. J.; Frey, H. U.

    1986-01-01

    By improving several important components of the well-known off-axis solar simulator system, a considerable step forward was made. Careful mathematical studies on the optical and thermal sides of the problem led to a highly efficient system with low operational costs and high reliability. The actual performance of the simulator is significantly better than the specified one, and the efficiency is outstanding. No more than 12 lamps operating at 18 kW are required to obtain one Solar Constant in the 6 m beam. It is now known that by using sophisticated optics, even larger facilities of high performance can be designed without leaving the proven off-axis concept and its spherical mirror. Using high performance optics is a means of reducing costs at a given beam size, because the number of lamps is one of the most cost-driving factors in the construction of a solar simulator.
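
    A quick consistency check on the quoted figures; the solar constant value and the efficiency interpretation are our assumptions, not the paper's.

        import math

        n_lamps, p_lamp_w = 12, 18_000          # 12 lamps at 18 kW (from the abstract)
        beam_diameter_m   = 6.0                 # 6 m beam (from the abstract)
        solar_constant    = 1361.0              # W/m^2, assumed modern value

        beam_area = math.pi * (beam_diameter_m / 2) ** 2       # ~28.3 m^2
        radiant_power_needed = solar_constant * beam_area      # ~38.5 kW
        electrical_input = n_lamps * p_lamp_w                  # 216 kW

        print(f"wall-plug efficiency ~ {radiant_power_needed / electrical_input:.1%}")
        # ~18%, high for an off-axis lamp-based simulator, consistent with the claim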

  1. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  2. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  3. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.
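
    The usual reduction from such beam tests, sketched below with placeholder numbers rather than the paper's data, converts an observed upset count and beam fluence into a device cross-section and an expected in-flight upset rate.

        errors_observed = 42          # placeholder upset count from a beam run
        fluence_n_cm2   = 1.0e11      # placeholder accelerated-neutron fluence, n/cm^2

        sigma_cm2 = errors_observed / fluence_n_cm2        # device cross-section

        # Avionics-altitude neutron flux is commonly a few thousand n/cm^2/hr;
        # the value below is an assumption for illustration.
        flight_flux = 6000.0                               # n/(cm^2*hr)
        upsets_per_hour = sigma_cm2 * flight_flux
        print(f"sigma = {sigma_cm2:.2e} cm^2, ~{upsets_per_hour:.2e} upsets/hr")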

  4. Stability and control of maneuvering high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Stengel, R. F.; Berry, P. W.

    1977-01-01

    The stability and control of a high-performance aircraft was analyzed, and a design methodology for a departure prevention stability augmentation system (DPSAS) was developed. A general linear aircraft model was derived which includes maneuvering flight effects and trim calculation procedures for investigating highly dynamic trajectories. The stability and control analysis systematically explored the effects of flight condition and angular motion, as well as the stability of typical air combat trajectories. The effects of configuration variation also were examined.
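
    The stability question for such a linear model reduces to an eigenvalue test: with x' = Ax + Bu, the trim point is stable if every eigenvalue of A has a negative real part. A minimal sketch follows, using an illustrative longitudinal model rather than the report's.

        import numpy as np

        # Illustrative linearized longitudinal dynamics, not from the report
        A = np.array([[-0.02,  0.05,  0.0,  -9.81],
                      [-0.10, -0.50, 70.0,   0.0 ],
                      [ 0.00, -0.01, -0.60,  0.0 ],
                      [ 0.00,  0.00,  1.0,   0.0 ]])

        eigvals = np.linalg.eigvals(A)
        print("eigenvalues:", np.round(eigvals, 3))
        print("stable trim point:", bool(np.all(eigvals.real < 0)))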

  5. Inorganic nanostructured materials for high performance electrochemical supercapacitors

    NASA Astrophysics Data System (ADS)

    Liu, Sheng; Sun, Shouheng; You, Xiao-Zeng

    2014-01-01

    Electrochemical supercapacitors (ES) are a well-known energy storage system with high power density, long cycle life, and fast charge-discharge kinetics. Nanostructured materials are a new generation of electrode materials with large surface area and short transport/diffusion paths for ions and electrons, enabling high specific capacitance in ES. This mini review highlights recent developments of inorganic nanostructured materials, including carbon nanomaterials, metal oxide nanoparticles, and metal oxide nanowires/nanotubes, for high performance ES applications.
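
    The high-power-density claim traces to two textbook relations: stored energy E = CV^2/2 and matched-load power limit P_max = V^2/(4R_ESR). A worked example with assumed cell values, not figures from the review:

        C = 100.0      # farads, assumed cell capacitance
        V = 2.7        # volts, assumed rated voltage
        R = 0.01       # ohms, assumed equivalent series resistance

        energy_j  = 0.5 * C * V**2          # E = 1/2 C V^2  -> 364.5 J
        power_max = V**2 / (4 * R)          # P = V^2 / 4R   -> ~182 W
        print(f"E = {energy_j:.1f} J, P_max = {power_max:.1f} W")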

  6. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects on how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture with suggestions for further reading by Don Miguel Ruiz (The four agreements) and John Maxwell (21 Irrefutable laws of leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture. PMID:20127598

  7. High performance stepper motors for space mechanisms

    NASA Astrophysics Data System (ADS)

    Sega, Patrick; Estevenon, Christine

    1995-05-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.
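
    The computation chain outlined above, from a finite-element permeance curve to reluctance torque via T(theta) = (1/2) F^2 dP/dtheta (F being the ampere-turns), can be sketched numerically. The permeance curve and excitation below are illustrative, not a real motor's.

        import numpy as np

        theta = np.linspace(0.0, 2 * np.pi / 50, 200)    # one tooth pitch, 50 teeth
        P = 2e-6 + 0.5e-6 * np.cos(50 * theta)           # toy permeance curve, henries
        F = 400.0                                        # assumed ampere-turns

        torque = 0.5 * F**2 * np.gradient(P, theta)      # T = 1/2 F^2 dP/dtheta, N*m
        print(f"peak torque ~ {abs(torque).max():.2f} N*m")   # ~2 N*m for these values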

  8. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
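
    The smooth/restrict/correct cycle at the heart of AMG can be illustrated with its geometric two-grid ancestor (AMG builds the coarse space algebraically, but the cycle is the same). A minimal sketch for the 1-D Poisson problem with a weighted Jacobi smoother:

        import numpy as np

        def poisson(n):
            """Tridiagonal 1-D Poisson matrix on n interior points."""
            return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def jacobi(A, x, b, iters=3, w=2/3):
            """Weighted Jacobi smoothing sweeps."""
            d = np.diag(A)
            for _ in range(iters):
                x = x + w * (b - A @ x) / d
            return x

        def two_grid(A, x, b):
            x = jacobi(A, x, b)                         # pre-smooth
            n = len(b)
            nc = (n - 1) // 2
            # Full-weighting restriction; prolongation is its (scaled) transpose.
            Ic = np.zeros((nc, n))
            for i in range(nc):
                Ic[i, 2*i:2*i+3] = [0.25, 0.5, 0.25]
            Ac = Ic @ A @ (2 * Ic.T)                    # Galerkin coarse operator
            rc = Ic @ (b - A @ x)                       # restricted residual
            x = x + (2 * Ic.T) @ np.linalg.solve(Ac, rc)  # coarse-grid correction
            return jacobi(A, x, b)                      # post-smooth

        n = 63
        A, b = poisson(n), np.ones(n)
        x = np.zeros(n)
        for k in range(10):
            x = two_grid(A, x, b)
            print(k, np.linalg.norm(b - A @ x))         # residual drops each cycle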

  9. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  10. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  11. High performance railgun barrels for laboratory use

    NASA Astrophysics Data System (ADS)

    Bauer, David P.; Newman, Duane C.

    1993-01-01

    High performance, low-cost laboratory railgun barrels are now available, comprising an inherently stiff containment structure which surrounds bore components machined from off-the-shelf materials. The shape of the containment structure was selected to make the barrel inherently stiff. The structure consists of stainless steel laminations which do not compromise the electrical efficiency of the railgun. The modular design enhances the utility of the barrel, as it is easy to service between shots and can be 're-cored' to produce different configurations and sizes using the same structure. We have produced barrels ranging from 15 mm to 90 mm square bore, a 30 mm round bore, and lengths varying from 0.25 meters to 10 meters. Successful tests with both plasma and solid metal armatures have demonstrated the versatility and performance of this design.

  12. Development of a high performance peristaltic micropump

    NASA Astrophysics Data System (ADS)

    Pham, My; Goo, Nam Seo

    2008-03-01

    In this study, a high performance peristaltic micropump has been developed and investigated. The micropump has three cylinder chambers which are connected through micro-channels for high pumping pressure performance. A circular mini LIPCA has been designed and manufactured for the actuating diaphragm. In this LIPCA, a 0.1 mm thick PZT ceramic is used as the active layer. The actuator was shown to produce large out-of-plane deflection while consuming little power. During the design process, a coupled-field analysis was conducted to predict the actuating behavior of the diaphragm and the pumping performance. A MEMS technique was used to fabricate the peristaltic micropump. Pumping performance of the present micropump was investigated both numerically and experimentally. The present peristaltic micropump was shown to have higher performance than micropumps of the same kind developed elsewhere.
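
    The peristaltic principle with three chambers amounts to driving the diaphragms in a travelling-wave sequence so fluid is handed from inlet to outlet. The six-phase pattern below is a common textbook sequence, not necessarily the phasing used in this paper.

        import itertools

        # 1 = chamber compressed (diaphragm down), 0 = chamber open
        SEQUENCE = [(1, 0, 0),
                    (1, 1, 0),
                    (0, 1, 0),
                    (0, 1, 1),
                    (0, 0, 1),
                    (1, 0, 1)]

        # Cycling the sequence sweeps a compression wave from inlet to outlet.
        for step, state in zip(range(12), itertools.cycle(SEQUENCE)):
            print(f"step {step}: chambers (inlet, middle, outlet) = {state}")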

  13. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
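
    The vector-scheduling idea described above can be caricatured in a few lines: group tracks into fixed-size vectors ("baskets") and dispatch the vectors, rather than individual particles, to a pool of workers. All names and the toy physics below are illustrative, not Geant-V code.

        from multiprocessing import Pool
        import math

        BASKET_SIZE = 64

        def propagate_basket(energies):
            """Stand-in for vectorised transport of one basket of tracks."""
            return [e * math.exp(-0.1) for e in energies]   # toy energy-loss step

        if __name__ == "__main__":
            tracks = [float(i % 100 + 1) for i in range(10_000)]   # toy event
            baskets = [tracks[i:i + BASKET_SIZE]
                       for i in range(0, len(tracks), BASKET_SIZE)]
            with Pool(4) as pool:
                results = pool.map(propagate_basket, baskets)      # one basket per task
            print(sum(len(r) for r in results), "tracks propagated")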

  14. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  15. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI’s Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI’s patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low-cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume, low-cost production of Si-CNF material for anodes in Li-ion batteries.

  16. High Performance Descriptive Semantic Analysis of Semantic Graph Databases

    SciTech Connect

    Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan; Feo, John T.; Haglin, David J.; Mackey, Greg E.; Mizell, David W.

    2011-06-02

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
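
    Of the analyses named above, connected components is the easiest to sketch: treat the RDF triples as an undirected edge list and run a breadth-first search. The triples here are made up for illustration; the actual computation ran on the Cray XMT at billion-triple scale.

        from collections import defaultdict, deque

        triples = [("ex:a", "ex:knows", "ex:b"),
                   ("ex:b", "ex:cites", "ex:c"),
                   ("ex:d", "ex:knows", "ex:e")]

        adj = defaultdict(set)
        for s, _, o in triples:        # ignore predicates for plain connectivity
            adj[s].add(o)
            adj[o].add(s)

        seen, components = set(), []
        for node in adj:
            if node in seen:
                continue
            comp, queue = set(), deque([node])
            while queue:               # BFS over one component
                u = queue.popleft()
                if u in comp:
                    continue
                comp.add(u)
                queue.extend(adj[u] - comp)
            seen |= comp
            components.append(comp)

        print(len(components), "components:", components)   # 2 components here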

  17. Some design considerations for high-performance infrared imaging seeker

    NASA Astrophysics Data System (ADS)

    Fan, Jinxiang; Huang, Jianxiong

    2015-10-01

    In recent years, precision guided weapons have played an increasingly important role in modern war, and the development and application of infrared imaging guidance technology have received growing attention. With increasing mission and environment complexity, precision guided weapons place stricter demands on the infrared imaging seeker: high detection sensitivity, large dynamic range, good target recognition capability, anti-jamming capability, and environmental adaptability. To meet the strict demands of the weapon system, several important issues should be considered in high-performance infrared imaging seeker design. The mission, targets, and environment of the infrared imaging guided missile must be taken into account. The tradeoff among performance goals, design parameters, infrared technology constraints, and missile constraints should be considered. The optimized application of the IRFPA and ATR in complicated environments should be addressed. In this paper, some design considerations for a high-performance infrared imaging seeker are discussed.

  18. Building and measuring a high performance network architecture

    SciTech Connect

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck; Dugan, Jon; Wheeler, David; Wing, William R; Nickless, William; Goddard, Gregory; Corbato, Steven; Love, E. Paul; Daspit, Paul; Edwards, Hal; Mercer, Linden; Koester, David; Decina, Basil; Dart, Eli; Paul Reisinger, Paul; Kurihara, Riki; Zekauskas, Matthew J; Plesset, Eric; Wulf, Julie; Luce, Douglas; Rogers, James; Duncan, Rex; Mauth, Jeffery

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  19. On implementing MPI-IO portably and with high performance.

    SciTech Connect

    Thakur, R.; Gropp, W.; Lusk, E.

    1998-11-30

    We discuss the issues involved in implementing MPI-IO portably on multiple machines and file systems and also achieving high performance. One way to implement MPI-IO portably is to implement it on top of the basic Unix I/O functions (open, seek, read, write, and close), which are themselves portable. We argue that this approach has limitations in both functionality and performance. We instead advocate an implementation approach that combines a large portion of portable code and a small portion of code that is optimized separately for different machines and file systems. We have used such an approach to develop a high-performance, portable MPI-IO implementation, called ROMIO. In addition to basic I/O functionality, we consider the issues of supporting other MPI-IO features, such as 64-bit file sizes, noncontiguous accesses, collective I/O, asynchronous I/O, consistency and atomicity semantics, user-supplied hints, shared file pointers, portable data representation, file preallocation, and some miscellaneous features. We describe how we implemented each of these features on various machines and file systems. The machines we consider are the HP Exemplar, IBM SP, Intel Paragon, NEC SX-4, SGI Origin2000, and networks of workstations; and the file systems we consider are HP HFS, IBM PIOFS, Intel PFS, NEC SFS, SGI XFS, NFS, and any general Unix file system (UFS). We also present our thoughts on how a file system can be designed to better support MPI-IO. We provide a list of features desired from a file system that would help in implementing MPI-IO correctly and with high performance.
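
    The style of access ROMIO is designed to optimize, collective I/O at explicit offsets, looks like the following from Python via mpi4py; this is a minimal sketch, assuming an MPI environment (launch with, e.g., mpiexec -n 4 python demo.py), and is not code from the paper.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
        fh = MPI.File.Open(comm, "out.dat", amode)

        data = np.full(1024, rank, dtype=np.int32)   # each rank writes its own block
        offset = rank * data.nbytes                  # non-overlapping explicit offsets

        fh.Write_at_all(offset, data)                # collective write at offset
        fh.Close()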

  20. High performance APCS conceptual design and evaluation scoping study

    SciTech Connect

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance APC system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis verifies that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.