Science.gov

Sample records for high-performance parallel coupler

  1. Cpl6: The New Extensible, High-Performance Parallel Coupler for the Community Climate System Model

    SciTech Connect

    Craig, Anthony P.; Jacob, Robert L.; Kauffman, Brian; Bettge, Tom; Larson, Jay; Ong, Everest; Ding, Chris; He, Yun

    2005-03-24

    Coupled climate models are large, multiphysics applications designed to simulate the Earth's climate and predict the response of the climate to any changes in the forcing or boundary conditions. The Community Climate System Model (CCSM) is a widely used, state-of-the-art climate model whose versions have been released to the climate community over the past ten years. Like many climate models, CCSM employs a coupler, a functional unit that coordinates the exchange of data between parts of the climate system such as the atmosphere and ocean. This paper describes the new coupler, cpl6, contained in the latest version of CCSM, CCSM3. Cpl6 introduces distributed-memory parallelism to the coupler, a class library for important coupler functions, and a standardized interface for component models. Cpl6 is implemented entirely in Fortran90 and uses the Model Coupling Toolkit as the base for most of its classes. Cpl6 gives improved performance over previous versions and scales well on multiple platforms.
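
    To make the coupler's data-exchange role concrete, the following is a minimal sketch, in Python with mpi4py rather than the Fortran90/MCT implementation described above, of a hub-style exchange between an "atmosphere" and an "ocean" component through a coupler rank; the rank layout, field names, and grid size are invented for illustration only.

        # Run with: mpiexec -n 3 python coupler_sketch.py
        # Rank 0 acts as the coupler; ranks 1 and 2 are toy components.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        NPTS = 8                      # points on the (toy) exchange grid

        if rank == 1:                 # "atmosphere": send fluxes, receive SST
            flux = np.full(NPTS, 100.0)
            comm.Send(flux, dest=0, tag=10)
            sst = np.empty(NPTS)
            comm.Recv(sst, source=0, tag=20)
        elif rank == 2:               # "ocean": send SST, receive fluxes
            sst = np.full(NPTS, 290.0)
            comm.Send(sst, dest=0, tag=11)
            flux = np.empty(NPTS)
            comm.Recv(flux, source=0, tag=21)
        elif rank == 0:               # coupler: receive, "regrid" (identity), forward
            flux = np.empty(NPTS)
            sst = np.empty(NPTS)
            comm.Recv(flux, source=1, tag=10)
            comm.Recv(sst, source=2, tag=11)
            comm.Send(sst, dest=1, tag=20)    # SST back to the atmosphere
            comm.Send(flux, dest=2, tag=21)   # fluxes on to the ocean
            print("coupler forwarded", NPTS, "points each way")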

  2. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  3. High Performance Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek; Kaewpijit, Sinthop

    1998-01-01

    Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension reduction is a spectral transformation aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnect. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.
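
    The PCA transformation at the heart of this work can be summarized in a few lines of NumPy. The sketch below is a serial illustration on a synthetic hyperspectral cube (the report's parallel version distributes the pixel dimension across cluster nodes); all sizes are arbitrary.

        # Sketch of PCA dimension reduction on a synthetic hyperspectral cube
        # (rows x cols x bands). Serial NumPy version for illustration only.
        import numpy as np

        rows, cols, bands, keep = 64, 64, 128, 10           # toy sizes (assumed)
        cube = np.random.rand(rows, cols, bands)

        pixels = cube.reshape(-1, bands)                     # one spectrum per row
        centered = pixels - pixels.mean(axis=0)
        cov = centered.T @ centered / (centered.shape[0] - 1)   # bands x bands

        eigval, eigvec = np.linalg.eigh(cov)                 # ascending eigenvalues
        order = np.argsort(eigval)[::-1]                     # sort descending
        components = eigvec[:, order[:keep]]                 # top principal components

        reduced = centered @ components                      # pixels x keep
        print("reduced cube shape:", reduced.reshape(rows, cols, keep).shape)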

  4. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, and the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capability does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to

  5. Analysis of high performance conjugate heat transfer with the OpenPALM coupler

    NASA Astrophysics Data System (ADS)

    Duchaine, Florent; Jauré, Stéphan; Poitou, Damien; Quémerais, Eric; Staffelbach, Gabriel; Morel, Thierry; Gicquel, Laurent

    2015-01-01

    In many communities, such as climate science and industrial design, solving complex coupled problems with high fidelity through the external coupling of legacy solvers puts considerable pressure on the coupling tool. The precision of such predictions depends not only on the simulation resolution and the use of very large meshes but also on high performance computing to reduce turnaround times. In this context, the current work aims at studying the scalability of code coupling on high performance computing architectures for a conjugate heat transfer problem. The flow solver is a Large Eddy Simulation code that has already been ported to massively parallel architectures. The conduction solver is based on the same data structure and thus shares the flow solver's scalability properties. Accurately coupling solvers on massively parallel architectures while maintaining their scalability is challenging. It requires exchanging and treating information based on two different computational grids that are partitioned differently on different numbers of cores. Such transfers have to be designed to maintain code scalability while preserving numerical accuracy. This raises communication and high performance computing issues: transferring data from one distributed interface to another in a parallel way and on a very large number of processors is not straightforward and solutions are not obvious. Performance tests have been carried out up to 12,288 cores on the CURIE supercomputer (TGCC/CEA). Results show good behavior of the coupled model when increasing the number of cores, thanks to the fully distributed exchange process implemented in the coupler. Advanced analyses are carried out to draw new paths for future developments of coupled simulations, e.g. optimization of the data transfer protocols through asynchronous communications or coupling-aware preprocessing of the coupled models (mesh partitioning phase).

  6. Parallel Algebraic Multigrid Methods - High Performance Preconditioners

    SciTech Connect

    Yang, U M

    2004-11-11

    The development of high performance, massively parallel computers and the increasing demands of computationally challenging applications have necessitated the development of scalable solvers and preconditioners. One of the most effective ways to achieve scalability is the use of multigrid or multilevel techniques. Algebraic multigrid (AMG) is a very efficient algorithm for solving large problems on unstructured grids. While much of it can be parallelized in a straightforward way, some components of the classical algorithm, particularly the coarsening process and some of the most efficient smoothers, are highly sequential, and require new parallel approaches. This chapter presents the basic principles of AMG and gives an overview of various parallel implementations of AMG, including descriptions of parallel coarsening schemes and smoothers, some numerical results as well as references to existing software packages.
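
    As a reminder of the multigrid principle that AMG generalizes to unstructured matrices, the sketch below implements a plain geometric two-grid cycle (weighted-Jacobi smoothing plus a Galerkin coarse-grid correction) for a 1D Poisson problem; it is illustrative only and does not perform the algebraic coarsening discussed in the chapter.

        # Two-grid cycle for 1D Poisson: smoothing + coarse-grid correction.
        import numpy as np

        def poisson(n):
            """1D Poisson matrix with Dirichlet BCs, mesh width h = 1/(n+1)."""
            A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
            return A * (n + 1) ** 2

        def jacobi(A, x, b, sweeps=3, omega=2.0 / 3.0):
            D = np.diag(A)
            for _ in range(sweeps):
                x = x + omega * (b - A @ x) / D
            return x

        def two_grid(A, b, x):
            n = len(b)
            nc = (n - 1) // 2
            # linear interpolation from the coarse grid (every other fine point)
            P = np.zeros((n, nc))
            for j in range(nc):
                P[2 * j, j] = 0.5
                P[2 * j + 1, j] = 1.0
                P[2 * j + 2, j] = 0.5
            x = jacobi(A, x, b)                          # pre-smoothing
            r = b - A @ x
            Ac = P.T @ A @ P                             # Galerkin coarse operator
            x = x + P @ np.linalg.solve(Ac, P.T @ r)     # coarse-grid correction
            return jacobi(A, x, b)                       # post-smoothing

        n = 63
        A, b = poisson(n), np.ones(n)
        x = np.zeros(n)
        for it in range(10):
            x = two_grid(A, b, x)
            print(it, np.linalg.norm(b - A @ x))         # residual drops each cycle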

  7. A high-performance SOI grating coupler with completely vertical emission

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-Lun; Tseng, Chih-Wei; Chen, Erik; Na, Neil

    2014-03-01

    The silicon-on-insulator grating coupler is a crucial passive device that enables the coupling of light into and out of a submicron, silicon-based photonic integrated circuit. In particular, a grating coupler with completely vertical emission is vital for interfacing surface-emitting/receiving optoelectronic devices, and can largely reduce the packaging cost and complexity associated with an off-normal configuration. Unfortunately, a grating coupler with completely vertical emission inevitably induces second-order diffraction that significantly degrades the coupling efficiency and enhances the backreflection. In this work, we propose and study a new concept for making a high-performance grating coupler with completely vertical emission. Following our design strategy, we numerically show that a 1.1 dB total coupling loss, a -24.4 dB backreflection, and a 20 nm spectral linewidth can be achieved at the same time (without full optimization), when our grating coupler is butt-coupled to a standard single-mode fiber operating around the 1310 nm wavelength. Compared to previous proposals such as slanted gratings, polymer wedges, and entrance mirrors made by slits or chirped gratings, our approach requires only CMOS-process-compatible elements and does not involve complex numerical methods to reach the desired performance. The design methodology and optimization procedure will also be discussed.

  8. A high-performance and cost-effective grating coupler for ultraviolet light

    NASA Astrophysics Data System (ADS)

    Zhang, Jingjing; Chen, Dingbo; Yang, Junbo; Chang, Shengli; Zhang, Hailiang; Jia, Honghui

    2015-10-01

    Ultraviolet (UV) light, typically defined as wavelengths from 10 nm to 400 nm, has proven useful for a number of applications in astronomy, biology, medicine and other fields, so it is important to study UV light and the related devices. In this paper, a novel and effective grating coupler for ultraviolet light is reported, which can efficiently couple ultraviolet light from a fiber to a waveguide at a wavelength of 300 nm. The grating coupler is based on the oxide layer on the silicon surface, because ultraviolet light can be transmitted through pure silicon dioxide (SiO2) with low loss. Based on the Bragg condition of the grating, we use the FDTD method to simulate and design the grating parameters under TM polarization. By optimizing the design parameters (period T, incident angle θ, filling factor f and etching height h) to improve the mode matching between the fiber and the grating region, a relatively high coupling efficiency is obtained at the fiber-waveguide interface. The resulting design, with filling factor f = 0.55, period T = 280 nm, etching height h = 110 nm and incident angle θ = 11°, can be realized in a manufacturing process. However, the coupling efficiency is sensitive to the grating period and the incident angle θ, which increases the difficulty of fabrication and experiment and demands high precision in processing and operation. Consequently, the coupling efficiency can be increased beyond 88.5% at a center wavelength of 296 nm within the 1 dB bandwidth; the theoretical analysis and the simulation results are in good agreement, and to our knowledge this is the highest coupling efficiency reported for this kind of coupler. Such submicron-sized SiO2 waveguides, which can be fabricated by mature CMOS-compatible processes, show promise for realistic dense photonic integrated circuits (PICs) in various applications including optical communications, optical interconnects, signal processing and sensing. The gratings open the path to pure silicon dioxide
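
    As a rough sanity check of the quoted design values, the snippet below evaluates the first-order phase-matching (Bragg) condition n_eff = n_c sin(θ) + λ/T for the stated period, angle, and wavelength; the cladding index n_c = 1 (air) and the sign convention are assumptions made here, not values taken from the paper.

        # Back-of-the-envelope grating phase-matching check with the design
        # values quoted above. n_c = 1.0 (air cladding) is an assumption.
        import math

        lam   = 296e-9        # center wavelength (m)
        T     = 280e-9        # grating period (m)
        theta = math.radians(11.0)
        n_c   = 1.0           # assumed cladding index
        m     = 1             # diffraction order

        n_eff = n_c * math.sin(theta) + m * lam / T
        print(f"required waveguide effective index: {n_eff:.3f}")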

  9. High Performance Parallel Methods for Space Weather Simulations

    NASA Technical Reports Server (NTRS)

    Hunter, Paul (Technical Monitor); Gombosi, Tamas I.

    2003-01-01

    This is the final report of our NASA AISRP grant entitled 'High Performance Parallel Methods for Space Weather Simulations'. The main thrust of the proposal was to achieve significant progress towards new high-performance methods which would greatly accelerate global MHD simulations and eventually make it possible to develop first-principles based space weather simulations which run much faster than real time. We are pleased to report that with the help of this award we made major progress in this direction and developed the first parallel implicit global MHD code with adaptive mesh refinement. The main limitation of all earlier global space physics MHD codes was the explicit time stepping algorithm. Explicit time steps are limited by the Courant-Friedrichs-Lewy (CFL) condition, which essentially ensures that no information travels more than a cell size during a time step. This condition represents a non-linear penalty for highly resolved calculations, since finer grid resolution (and consequently smaller computational cells) not only results in more computational cells, but also in smaller time steps.
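
    The CFL restriction described above can be stated in one line: the explicit time step is bounded by the cell size divided by the fastest wave speed. The values below are purely illustrative.

        # Explicit time-step limit from the CFL condition; numbers are made up.
        dx = 100e3            # cell size (m), e.g. a coarse magnetospheric grid
        v_max = 1.0e6         # fastest wave speed in the cell (m/s)
        cfl = 0.8             # Courant number < 1
        dt = cfl * dx / v_max
        print(f"explicit time-step limit: {dt:.3f} s")   # halving dx halves dt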

  10. Fundamental and HOM Coupler Design for the Superconducting Parallel-Bar Cavity

    SciTech Connect

    De Silva, S. U.; Delayen, J. R.

    2011-03-01

    The superconducting parallel-bar cavity is currently being considered as a deflecting system for the Jefferson Lab 12 GeV upgrade and as a crabbing cavity for a possible LHC luminosity upgrade. Currently the designs are optimized to achieve lower surface fields within the dimensional constraints for the above applications. A detailed analysis of the fundamental input power coupler design for the parallel-bar cavity is performed considering beam loading and the effects of microphonics. For higher beam loading, the damping of the HOMs is vital to reduce beam instabilities generated by the wake fields. An analysis of the threshold impedances for each application and of the impedances of the modes that require damping is presented in this paper, together with the design of the HOM couplers.

  11. A Generic Scheduling Simulator for High Performance Parallel Computers

    SciTech Connect

    Yoo, B S; Choi, G S; Jette, M A

    2001-08-01

    It is well known that efficient job scheduling plays a crucial role in achieving high system utilization in large-scale high performance computing environments. A good scheduling algorithm should schedule jobs to achieve high system utilization while satisfying various user demands in an equitable fashion. Designing such a scheduling algorithm is a non-trivial task even in a static environment. In practice, the computing environment and workload are constantly changing. There are several reasons for this. First, computing platforms constantly evolve as the technology advances. For example, the availability of relatively powerful commodity off-the-shelf (COTS) components at steadily diminishing prices has made it feasible to construct ever larger massively parallel computers in recent years [1, 4]. Second, the workload imposed on the system also changes constantly. The rapidly increasing compute resources have given many application developers the opportunity to radically alter program characteristics and take advantage of these additional resources. New developments in software technology may also trigger changes in user applications. Finally, changes in the political climate may alter user priorities or the mission of the organization. System designers in such dynamic environments must be able to accurately forecast the effect of changes in the hardware, software, and/or policies under consideration. If the environmental changes are significant, one must also reassess scheduling algorithms. Simulation has frequently been relied upon for this analysis, because other methods such as analytical modeling or actual measurements are usually too difficult or costly. A drawback of the simulation approach, however, is that developing a simulator is a time-consuming process. Furthermore, an existing simulator cannot be easily adapted to a new environment. In this research, we attempt to develop a generic job-scheduling simulator, which facilitates the evaluation of

  12. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    SciTech Connect

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.

  13. Parallel 3-D Electromagnetic Particle Code Using High Performance FORTRAN: Parallel TRISTAN

    NASA Astrophysics Data System (ADS)

    Cai, D.; Li, Y.; Nishikawa, K.-I.; et al.

    A three-dimensional full electromagnetic particle-in-cell (PIC) code, the TRISTAN (Tridimensional Stanford) code, has been parallelized using High Performance Fortran (HPF) as a RPM (Real Parallel Machine). In the parallelized HPF code, the simulation domain is decomposed in one dimension, and the particle and field data located in each domain, which we call a sub-domain, are distributed to each processor. Both the particle and field data on a sub-domain are needed by the neighboring sub-domains, and thus communications between the sub-domains are inevitable. Our simulation results using HPF exhibit the promising applicability of the HPF communications to large-scale scientific computing such as solar wind-magnetosphere interactions.
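
    The one-dimensional decomposition described above implies a guard-cell (halo) exchange between neighboring sub-domains at every step. The sketch below shows that exchange pattern with mpi4py; it is an illustration of the idea, not the original HPF code.

        # 1D slab decomposition: each rank owns interior cells plus one guard
        # cell per side, exchanged with its periodic neighbours every step.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left, right = (rank - 1) % size, (rank + 1) % size   # periodic neighbours

        nx_local = 16
        field = np.full(nx_local + 2, float(rank))           # interior + 2 guard cells

        # send my rightmost interior cell to the right neighbour's left guard cell,
        # receive my own left guard cell from the left neighbour (and vice versa)
        comm.Sendrecv(sendbuf=field[-2:-1], dest=right, sendtag=0,
                      recvbuf=field[0:1],  source=left,  recvtag=0)
        comm.Sendrecv(sendbuf=field[1:2],  dest=left,  sendtag=1,
                      recvbuf=field[-1:],  source=right, recvtag=1)

        print(f"rank {rank}: guard cells = {field[0]}, {field[-1]}")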

  14. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  15. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.
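
    The stripe-distribute/stripe-collect idea can be illustrated in a few lines: bytes of a frame are dealt round-robin across the available parallel links and reassembled in order on the receiving side. This is a plain-Python illustration of the byte-striping concept only, with no real HIPPI or SONET framing.

        # Round-robin byte striping across N parallel links and reassembly.
        def stripe(frame: bytes, n_links: int) -> list[bytes]:
            """Deal frame bytes round-robin onto n_links stripes."""
            return [frame[i::n_links] for i in range(n_links)]

        def collect(stripes: list[bytes]) -> bytes:
            """Reassemble the original byte order from the stripes."""
            total = sum(len(s) for s in stripes)
            out = bytearray(total)
            for i, s in enumerate(stripes):
                out[i::len(stripes)] = s
            return bytes(out)

        frame = bytes(range(32))
        links = stripe(frame, n_links=4)      # fewer links => more bytes per link
        assert collect(links) == frame
        print([len(s) for s in links])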

  16. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system of sending and receiving gateways that interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.

  17. A high performance parallel computing architecture for robust image features

    NASA Astrophysics Data System (ADS)

    Zhou, Renyan; Liu, Leibo; Wei, Shaojun

    2014-03-01

    A design of a parallel architecture for image feature detection and description is proposed in this article. The major component of this architecture is a 2D cellular network composed of simple reprogrammable processors, enabling the Hessian blob detector and Haar response calculation, which are the most computation-intensive stages of the Speeded Up Robust Features (SURF) algorithm. Combining this 2D cellular network with dedicated hardware for SURF descriptors, the architecture achieves real-time image feature detection with minimal software in the host processor. A prototype FPGA implementation of the proposed architecture achieves 1318.9 GOPS of general pixel processing at a 100 MHz clock and reaches up to 118 fps in VGA (640 × 480) image feature detection. The proposed architecture is stand-alone and scalable, so it can easily be migrated to a VLSI implementation.

  18. High Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    In order to predict the dynamic response of a flexible structure in a fluid flow, the equations of motion of the structure and the fluid must be solved simultaneously. In this paper, we present several partitioned procedures for time-integrating this coupled problem and discuss their merits in terms of accuracy, stability, heterogeneous computing, I/O transfers, subcycling, and parallel processing. All theoretical results are derived for a one-dimensional piston model problem with a compressible flow, because the complete three-dimensional aeroelastic problem is difficult to analyze mathematically. However, the insight gained from the analysis of the coupled piston problem and the conclusions drawn from its numerical investigation are confirmed with the numerical simulation of the two-dimensional transient aeroelastic response of a flexible panel in a transonic nonlinear Euler flow regime.
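
    The flavor of such a partitioned (staggered) procedure can be shown on an invented toy problem: a spring-mass "piston" loaded by a relaxation-type "fluid" pressure, advanced with a predictor for the structural velocity, an implicit fluid step, and then the structural update. The model and coefficients below are made up; only the exchange pattern matters.

        # Toy staggered scheme: predict structure state, advance fluid, then
        # advance the structure with the freshly computed fluid load.
        m, k, area = 1.0, 40.0, 0.5          # structure: mass, stiffness, piston area
        tau, c = 0.05, 2.0                   # fluid: relaxation time, coupling coeff.
        dt, nsteps = 1e-3, 2000

        u, v, p = 0.1, 0.0, 0.0              # displacement, velocity, pressure
        for n in range(nsteps):
            v_pred = v                                           # structural predictor
            p = (p + dt * c * v_pred / tau) / (1.0 + dt / tau)   # implicit fluid step
            a = (area * p - k * u) / m                           # structure sees new load
            v += dt * a                                          # symplectic Euler update
            u += dt * v
        print(f"u = {u:+.4f}, p = {p:+.4f} after {nsteps} steps")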

  19. Parallel beam dynamics calculations on high performance computers

    NASA Astrophysics Data System (ADS)

    Ryne, Robert; Habib, Salman

    1997-02-01

    Faced with a backlog of nuclear waste and weapons plutonium, as well as ever-increasing public concern about safety and environmental issues associated with conventional nuclear reactors, many countries are studying new, accelerator-driven technologies that hold the promise of providing safe and effective solutions to these problems. Proposed projects include accelerator transmutation of waste (ATW), accelerator-based conversion of plutonium (ABC), accelerator-driven energy production (ADEP), and accelerator production of tritium (APT). Also, next-generation spallation neutron sources based on similar technology will play a major role in materials science and biological science research. The design of accelerators for these projects will require a major advance in numerical modeling capability. For example, beam dynamics simulations with approximately 100 million particles will be needed to ensure that extremely stringent beam loss requirements (less than a nanoampere per meter) can be met. Compared with typical present-day modeling using 10,000-100,000 particles, this represents an increase of 3-4 orders of magnitude. High performance computing (HPC) platforms make it possible to perform such large-scale simulations, which require tens of gigabytes of memory. They also make it possible to perform in a matter of hours smaller simulations that would require months to run on a single-processor workstation. This paper describes how HPC platforms can be used to perform the numerically intensive beam dynamics simulations required for the development of these new accelerator-driven technologies.

  20. High-Performance Parallel Computing. Final report, 1 February 1984-31 January 1985

    SciTech Connect

    Browne, J.C.; Lipovski, G.J.

    1986-01-22

    The 1984/85 accomplishments of the research project High-Performance Parallel Computing included bringing the prototype of the Texas Reconfigurable Array Computer (TRAC) to a configuration and to a state of stability where it could support execution of simple assembly language programs; initial development of a unified model of parallel computation which is a basis for a programming environment uniting process and data flow models of parallel computation; bringing to operational status on an alternative host one of the two parallel programming languages (the Computation Structures Language, CSL) originally intended for use on TRAC; exploration of the expressive capabilities of this programming language; initiation of development of a graphical programming language based on the unified model of parallel computation mentioned previously; major progress on a graphically interfaced Petri Net-based performance modeling system for parallel computations and development of algorithms for scheduling of circuits to realize configurations in configurable Banyan network-based computer architectures.

  1. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications". The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to effectively model antenna problems; application of lessons learned in the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; development and demonstration of improved radiation absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  2. DPR-tree: a distributed parallel spatial index structure for high performance spatial databases

    NASA Astrophysics Data System (ADS)

    Zhou, Yan; Zhu, Qing; Liu, Qiang

    2008-12-01

    Parallelism of the spatial index can significantly improve the performance of spatial queries, especially for massive spatial databases, so research on parallel spatial indexes plays an important role in high performance spatial databases. Existing parallel spatial index methods have two main shortcomings: one is the access hotspot and bottleneck caused by index items located on a main server; the other is the high cost and complicated operations needed to maintain index consistency. To address these, a distributed parallel spatial index structure called the DPR-tree is proposed. It splits the whole index region into partition sub-regions using a Hilbert space-filling curve grid and organizes index sub-regions according to the locality of spatial objects, then maps index sub-regions to partition sub-regions and assigns these index sub-regions to different computer nodes by an appointed map function. Each computer node manages a multi-level distributed sub-R-tree built from an index sub-region. Our experimental results indicate that the proposed parallel spatial index achieves good speedup and offers significant potential for reducing query response time.
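
    The locality-preserving assignment step can be sketched briefly. For simplicity the snippet below uses a Morton (Z-order) key instead of the Hilbert curve used by the DPR-tree, but the idea is the same: nearby cells of the space-filling curve, and hence nearby objects, map to the same computer node. Grid size and node count are arbitrary.

        # Assign points in [0,1)^2 to computer nodes via a space-filling-curve key.
        def morton_key(ix: int, iy: int, bits: int = 16) -> int:
            """Interleave the bits of the two grid indices into one key."""
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
            return key

        def assign_node(x: float, y: float, n_nodes: int, grid: int = 1 << 16) -> int:
            """Map a point to one of n_nodes nodes; contiguous curve ranges share a node."""
            ix, iy = int(x * grid), int(y * grid)
            return morton_key(ix, iy) * n_nodes // (grid * grid)

        pts = [(0.12, 0.15), (0.13, 0.14), (0.87, 0.91)]
        print([assign_node(x, y, n_nodes=8) for x, y in pts])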

  3. High-performance modeling acoustic and elastic waves using the parallel Dichotomy Algorithm

    SciTech Connect

    Fatyanov, Alexey G.; Terekhov, Andrew V.

    2011-03-01

    A high-performance parallel algorithm is proposed for modeling the propagation of acoustic and elastic waves in inhomogeneous media. An initial boundary-value problem is replaced by a series of boundary-value problems for a constant elliptic operator and different right-hand sides via the integral Laguerre transform. It is proposed to solve the difference equations by the conjugate gradient method for the acoustic equations and by the GMRES(k) method for modeling elastic waves. The preconditioning operator is the Laplace operator, which is inverted using the variable separation method. The novelty of the proposed algorithm is the use of the Dichotomy Algorithm, which was designed for solving a series of tridiagonal systems of linear equations, in the context of the preconditioning operator inversion. By considering analytical solutions, it is shown that modeling wave processes for long time intervals requires high-resolution meshes. The proposed parallel fine-mesh algorithm makes it possible to solve realistic seismic problems in acceptable time and with high accuracy. By solving model problems, it is demonstrated that the parallel algorithm possesses high performance and efficiency over a wide range of processor counts (from 2 to 8192).

  4. Parallel-vector unsymmetric Eigen-Solver on high performance computers

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Jiangning, Qin

    1993-01-01

    The popular QR algorithm for solving all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, it was concluded from this study that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR algorithm itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases have indicated that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed method increases as the problem size increases.
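
    The two stages discussed above, reduction to Hessenberg form followed by the QR eigenvalue iteration, can be reproduced with standard library routines; the sketch below uses SciPy's hessenberg and NumPy's eigvals on a random unsymmetric test matrix to confirm that the reduction preserves the spectrum.

        # Hessenberg reduction (the stage the paper parallelizes/vectorizes),
        # then eigenvalues of the reduced matrix; the spectra agree.
        import numpy as np
        from scipy.linalg import hessenberg

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 200))          # unsymmetric test matrix

        H, Q = hessenberg(A, calc_q=True)            # A = Q H Q^T, H upper Hessenberg
        assert np.allclose(Q @ H @ Q.T, A)

        eig_A = np.sort_complex(np.linalg.eigvals(A))
        eig_H = np.sort_complex(np.linalg.eigvals(H))
        print("max eigenvalue difference:", np.max(np.abs(eig_A - eig_H)))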

  5. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Gumaste, U.; Ronaghi, M.

    1994-01-01

    Applications of high-performance parallel computation to the analysis of complete jet engines are described, treating the engine as a multidisciplinary coupled problem. The coupled problem involves the interaction of structures with gas dynamics, heat conduction and heat transfer in aircraft engines. The methodology issues addressed include: consistent discrete formulation of coupled problems with emphasis on coupling phenomena; the effect of partitioning strategies, augmentation and temporal solution procedures; the sensitivity of the response to problem parameters; and methods for interfacing multiscale discretizations in different single fields. The computer implementation issues addressed include: parallel treatment of coupled systems; domain decomposition and mesh partitioning strategies; data representation in object-oriented form and mapping to hardware-driven representation; and tradeoff studies between partitioning schemes and fully coupled treatment.

  6. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1997-01-01

    Applications are described of high-performance computing methods to the numerical simulation of complete jet engines. The methodology focuses on the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field elements. New partitioned analysis procedures to treat this coupled three-component problem were developed. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. The NASA-sponsored ENG10 program was used for the global steady-state analysis of the whole engine. This program uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include the effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed, as well as the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames.

  7. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Chen, P.-S.; Gumaste, U.; Leoinne, M.; Stern, P.

    1995-01-01

    This research program deals with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a by-pass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled 3-component problem were developed in 1994. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers. For the global steady-state axisymmetric analysis of a complete engine we have decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 has been developed. It is planned to use the steady-state global solution provided by ENG10 as input to a localized three-dimensional FSI analysis for engine regions where aeroelastic effects may be important.

  8. High-Performance Parallel Analysis of Coupled Problems for Aircraft Propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Park, K. C.; Gumaste, U.; Chen, P.-S.; Lesoinne, M.; Stern, P.

    1996-01-01

    This research program dealt with the application of high-performance computing methods to the numerical simulation of complete jet engines. The program was initiated in January 1993 by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition and solution capabilities were successfully tested. Attention was then focused on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion driven by these structural displacements. The latter is treated by an ALE technique that models the fluid mesh motion as that of a fictitious mechanical network laid along the edges of near-field fluid elements. New partitioned analysis procedures to treat this coupled three-component problem were developed during 1994 and 1995. These procedures involved delayed corrections and subcycling, and have been successfully tested on several massively parallel computers, including the iPSC-860, Paragon XP/S and the IBM SP2. For the global steady-state axisymmetric analysis of a complete engine we decided to use the NASA-sponsored ENG10 program, which uses a regular FV-multiblock-grid discretization in conjunction with circumferential averaging to include the effects of blade forces, loss, combustor heat addition, blockage, bleeds and convective mixing. A load-balancing preprocessor for parallel versions of ENG10 was developed. During 1995 and 1996 we developed the capability for the first full 3D aeroelastic simulation of a multirow engine stage. This capability was tested on the IBM SP2 parallel supercomputer at NASA Ames. Benchmark results were presented at the 1996 Computational Aerosciences meeting.

  9. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    PubMed

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software which is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with associated relatively low memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors.

  10. High-performance parallel analysis of coupled problems for aircraft propulsion

    NASA Technical Reports Server (NTRS)

    Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.

    1994-01-01

    This research program deals with the application of high-performance computing methods to the analysis of complete jet engines. The program was initiated by applying two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involve delayed corrections and subcycling. Preliminary results on the stability, accuracy, and MPP computational efficiency are reported.

  11. Implementation of a Parallel High-Performance Visualization Technique in GRASS GIS

    SciTech Connect

    Sorokine, Alexandre

    2007-01-01

    This paper describes an extension for GRASS GIS that enables users to perform geographic visualization tasks on tiled high-resolution displays powered by clusters of commodity personal computers. Parallel visualization systems are becoming more common in scientific computing due to decreasing hardware costs and the availability of open source software to support such architectures. High-resolution displays allow scientists to visualize very large datasets with minimal loss of detail. Such systems hold great promise, especially in the field of geographic information systems, because users can naturally combine several geographic scales on a single display. The paper discusses the architecture, implementation and operation of pd-GRASS, a GRASS GIS extension for high-performance parallel visualization on tiled displays. pd-GRASS is particularly well suited to very large geographic datasets such as LIDAR data or high-resolution nation-wide geographic databases. The paper also briefly touches on computational efficiency, performance and potential applications for such systems.

  12. High performance parallel computing of flows in complex geometries: II. Applications

    NASA Astrophysics Data System (ADS)

    Gourdain, N.; Gicquel, L.; Staffelbach, G.; Vermorel, O.; Duchaine, F.; Boussuge, J.-F.; Poinsot, T.

    2009-01-01

    Present regulations on pollutant emissions and noise, together with economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well captured by numerical approaches, nor well understood, whatever the design stage considered. The main challenge is essentially the computational requirement imposed by such complex systems if they are to be simulated on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of the more complex systems encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the value of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows and coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis when dealing with these complex industrial applications are also presented.

  13. Multimedia OC12 parallel interface using VCSEL array to achieve high-performance cost-effective optical interconnections

    NASA Astrophysics Data System (ADS)

    Chang, Edward S.

    1996-09-01

    Multimedia communication needs high-performance, cost-effective communication techniques to transport data for the fast-growing multimedia traffic resulting from the recent deployment of the World Wide Web (WWW), media-on-demand, and other multimedia applications. To transport a large volume of multimedia data, high-performance servers are required to perform media processing and transfer. Typically, the high-performance multimedia server is a massively parallel processor with a large number of I/O ports, high storage capacity, fast signal processing, and excellent cost-performance. The parallel I/O ports of the server are connected to multiple clients through a network switch which uses parallel links in both switch-to-server and switch-to-client connections. In addition to media processing and storage, media communication is also a major function of the multimedia system. Without a high-performance communication network, a high-performance server cannot deliver its full capacity of service to clients. Fortunately, there are many advanced communication technologies developed for networking which can be adopted by multimedia communication to economically deliver the full capacity of a high-performance multimedia service to clients. The VCSEL array technology has been developed for gigabit-rate parallel optical interconnections because of its high bandwidth, small size, and ease of fabrication. Several firms are developing multifiber, low-skew, low-cost ribbon cables to transfer signals from a VCSEL array. The OC12 SONET data rate is widely used by high-performance multimedia communications for its high data rate and cost-effectiveness. Therefore, the OC12 VCSEL parallel optical interconnection is an ideal technology to meet the high-performance, low-cost requirements for delivering affordable multimedia services to mass users. This paper describes a multimedia OC12 parallel optical interconnection using a VCSEL array transceiver, a multifiber

  14. 2-μm switchable, tunable and power-controllable dual-wavelength fiber laser based on parallel cavities using 3 × 3 coupler

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Lu, Ping; Wang, Shun; Liu, Deming; Zhang, Jiangshan

    2015-08-01

    We demonstrate a switchable, tunable and power-controllable dual-wavelength fiber laser in the 2-μm region based on parallel cavities using a 3 × 3 coupler. The laser topology is based on the parallel connection of fiber Bragg gratings (FBGs) through the 3 × 3 coupler, which forms two individual cavities, so that the dual wavelengths are tunable and switchable by adjusting the center wavelengths of the FBGs and the cavity losses, respectively. With suitable cavity losses and input pump power, we can obtain a 2-μm switchable single- or dual-wavelength fiber laser. The proposed configuration has very good application prospects in the fields of atmospheric transmission, gas sensing, lidar and new wavelength-division-multiplexed fiber communication systems.

  15. Scalable Unix commands for parallel processors : a high-performance implementation.

    SciTech Connect

    Ong, E.; Lusk, E.; Gropp, W.

    2001-06-22

    We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
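
    A minimal sketch in the same spirit, using mpi4py rather than the C/MPI implementation described above: every rank produces a local directory listing and rank 0 prints a merged, host-labelled summary. The target directory is arbitrary.

        # Parallel "ls"-style sketch: each rank lists a directory, rank 0 reports.
        import os
        import socket
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        target = "/tmp"                                   # directory to list everywhere

        local = (socket.gethostname(), sorted(os.listdir(target)))
        listings = comm.gather(local, root=0)             # collect (host, files) pairs

        if comm.Get_rank() == 0:
            for host, files in listings:
                print(f"{host}: {len(files)} entries in {target}")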

  16. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at the shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations at higher frequency and finer resolution. Reducing time to solution and power consumption are the two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computational performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computational efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
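
    The latency-hiding communication model mentioned above follows a standard pattern: post non-blocking halo exchanges, update the interior cells while the messages are in flight, then finish the boundary cells once the halos arrive. The mpi4py sketch below shows this pattern for a 1D three-point stencil standing in for the 3D 13-point stencil; it is illustrative only.

        # Overlap interior computation with non-blocking halo exchange.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left, right = (rank - 1) % size, (rank + 1) % size

        n = 1024
        u = np.random.rand(n + 2)              # interior cells 1..n, halos at 0 and n+1
        halo_l, halo_r = np.empty(1), np.empty(1)

        reqs = [comm.Irecv(halo_l, source=left,  tag=0),
                comm.Irecv(halo_r, source=right, tag=1),
                comm.Isend(u[1:2],   dest=left,  tag=1),
                comm.Isend(u[n:n+1], dest=right, tag=0)]

        u_new = np.empty_like(u)
        u_new[2:n] = 0.5 * u[2:n] + 0.25 * (u[1:n-1] + u[3:n+1])   # interior first

        MPI.Request.Waitall(reqs)              # halos have arrived; finish the edges
        u[0], u[n + 1] = halo_l[0], halo_r[0]
        u_new[1] = 0.5 * u[1] + 0.25 * (u[0] + u[2])
        u_new[n] = 0.5 * u[n] + 0.25 * (u[n - 1] + u[n + 1])
        print(f"rank {rank} updated {n} cells")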

  17. The Design and Implementation of hypre, a Library of Parallel High Performance Preconditioners

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-17

    The increasing demands of computationally challenging applications and the advance of larger, more powerful computers with more complicated architectures have necessitated the development of new solvers and preconditioners. Since the implementation of these methods is quite complex, the use of high performance libraries with the newest efficient solvers and preconditioners becomes more important for promulgating their use into applications with relative ease. The hypre library [14, 17] has been designed with the primary goal of providing users with advanced scalable parallel preconditioners. Issues of robustness, ease of use, flexibility and interoperability have also been important. It can be used both as a solver package and as a framework for algorithm development. Its object model is more general and flexible than most current-generation solver libraries [9]. hypre also provides several of the most commonly used solvers, such as conjugate gradient for symmetric systems or GMRES for nonsymmetric systems, to be used in conjunction with the preconditioners. Design innovations have been made to enable access to the library in the way that application users naturally think about their problems. For example, application developers that use structured grids typically think of their problems in terms of stencils and grids. hypre's users do not have to learn complicated sparse matrix structures; instead hypre does the work of building these data structures through various conceptual interfaces. The conceptual interfaces currently implemented include stencil-based structured and semi-structured interfaces, a finite-element based unstructured interface, and a traditional linear-algebra based interface. The primary focus of this paper is on the design and implementation of the conceptual interfaces in hypre. The paper is organized as follows. The first two sections are of general interest. We begin in Section 2 with an introductory discussion of conceptual interfaces and

  18. Parallel Processing of Numerical Tsunami Simulations on a High Performance Cluster based on the GDAL Library

    NASA Astrophysics Data System (ADS)

    Schroeder, Matthias; Jankowski, Cedric; Hammitzsch, Martin; Wächter, Joachim

    2014-05-01

    Thousands of numerical tsunami simulations allow the computation of inundation and run-up along the coast for vulnerable areas over time. A so-called Matching Scenario Database (MSDB) [1] contains this large number of simulations in text file format. In order to visualize these wave propagations, the scenarios have to be reprocessed automatically. In the TRIDEC project, funded by the Seventh Framework Programme of the European Union, a Virtual Scenario Database (VSDB) and a Matching Scenario Database (MSDB) were established, amongst others, by the working group of the University of Bologna (UniBo) [1]. One part of TRIDEC was the development of a new generation of Decision Support System (DSS) for Tsunami Early Warning Systems (TEWS) [2]. A working group of the GFZ German Research Centre for Geosciences was responsible for developing the Command and Control User Interface (CCUI), the central software application that supports operator activities, incident management and message dissemination. For integration and visualization in the CCUI, the numerical tsunami simulations from the MSDB must be converted into the shapefile format. The use of shapefiles enables much easier integration into standard Geographic Information Systems (GIS); the CCUI itself is based on two widely used open source products (the GeoTools library and uDig), which provide shapefile support out of the box. In this case, several thousand tsunami variations were processed for an example area around the Western Iberian margin. Due to the mass of data, only a program-controlled process was conceivable. In order to optimize the computing effort and processing time, an existing GFZ High Performance Computing (HPC) cluster was used. Thus, geospatial software capable of parallel processing was sought. The FOSS tool Geospatial Data Abstraction Library (GDAL/OGR) was used to match the coordinates with the wave heights and generate the
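
    The conversion step can be sketched with the GDAL/OGR Python bindings plus a process pool; the input layout (one x, y, wave-height triplet per line), field name, and directory are assumptions made for illustration, not the actual MSDB format.

        # Convert text scenarios to point shapefiles with GDAL/OGR, in parallel.
        import multiprocessing as mp
        from pathlib import Path
        from osgeo import ogr

        def scenario_to_shapefile(txt_path: str) -> str:
            out_path = str(Path(txt_path).with_suffix(".shp"))
            driver = ogr.GetDriverByName("ESRI Shapefile")
            ds = driver.CreateDataSource(out_path)
            layer = ds.CreateLayer("waveheights", geom_type=ogr.wkbPoint)
            layer.CreateField(ogr.FieldDefn("height", ogr.OFTReal))
            with open(txt_path) as fh:
                for line in fh:
                    x, y, h = map(float, line.split())
                    feat = ogr.Feature(layer.GetLayerDefn())
                    point = ogr.Geometry(ogr.wkbPoint)
                    point.AddPoint(x, y)
                    feat.SetGeometry(point)
                    feat.SetField("height", h)
                    layer.CreateFeature(feat)
            ds = None                      # flush and close the data source
            return out_path

        if __name__ == "__main__":
            scenarios = sorted(str(p) for p in Path("scenarios").glob("*.txt"))
            with mp.Pool() as pool:        # one worker per core, as on the HPC nodes
                for shp in pool.map(scenario_to_shapefile, scenarios):
                    print("wrote", shp)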

  19. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for the generation and assembly of element matrices, solution of the resulting system of linear equations, calculation of the unbalanced loads, displacements, and stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton-Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.

  20. Parallel-META: efficient metagenomic data analysis based on high-performance computation

    PubMed Central

    2012-01-01

    Background The metagenomic method directly sequences and analyses genome information from microbial communities. There are usually hundreds of genomes or more from different microbial species in the same community, and the main computational tasks for metagenomic data analyses include taxonomical and functional component examination of all genomes in the microbial community. Metagenomic data analysis is both data- and computation-intensive, which requires extensive computational power. Most current metagenomic data analysis software was designed to be used on a single computer or a single computer cluster, which cannot keep pace with the computational requirements of the rapidly increasing number of large metagenomic projects. Therefore, advanced computational methods and pipelines have to be developed to cope with such need for efficient analyses. Result In this paper, we propose Parallel-META, a GPU- and multi-core-CPU-based open-source pipeline for metagenomic data analysis, which enables the efficient and parallel analysis of multiple metagenomic datasets and the visualization of the results for multiple samples. In Parallel-META, the similarity-based database search is parallelized based on GPU computing and multi-core CPU computing optimization. Experiments have shown that Parallel-META achieves at least a 15-fold speed-up compared to the traditional metagenomic data analysis method, with the same accuracy of results (http://www.computationalbioenergy.org/parallel-meta.html). Conclusion The parallel processing of current metagenomic data is very promising: with the current speed-up of 15 times and above, binning is no longer a very time-consuming process. Therefore, deeper analysis of the metagenomic data, such as the comparison of different samples, becomes feasible in the pipeline, and some of these functionalities have been included in the Parallel-META pipeline. PMID:23046922

  1. High Performance Parallel Processing Project: Industrial computing initiative. Progress reports for fiscal year 1995

    SciTech Connect

    Koniges, A.

    1996-02-09

    This project is a package of 11 individual CRADAs plus hardware. This innovative project established a three-year multi-party collaboration that is significantly accelerating the availability of commercial massively parallel processing computing software technology to U.S. government, academic, and industrial end-users. This report contains individual presentations from nine principal investigators along with overall program information.

  2. High-Performance Computation of Distributed-Memory Parallel 3D Voronoi and Delaunay Tessellation

    SciTech Connect

    Peterka, Tom; Morozov, Dmitriy; Phillips, Carolyn

    2014-11-14

    Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization; but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

  3. High Performance Molecular Visualization: In-Situ and Parallel Rendering with EGL

    PubMed Central

    Stone, John E.; Messmer, Peter; Sisneros, Robert; Schulten, Klaus

    2016-01-01

    Large scale molecular dynamics simulations produce terabytes of data that is impractical to transfer to remote facilities. It is therefore necessary to perform visualization tasks in-situ as the data are generated, or by running interactive remote visualization sessions and batch analyses co-located with direct access to high performance storage systems. A significant challenge for deploying visualization software within clouds, clusters, and supercomputers involves the operating system software required to initialize and manage graphics acceleration hardware. Recently, it has become possible for applications to use the Embedded-system Graphics Library (EGL) to eliminate the requirement for windowing system software on compute nodes, thereby eliminating a significant obstacle to broader use of high performance visualization applications. We outline the potential benefits of this approach in the context of visualization applications used in the cloud, on commodity clusters, and supercomputers. We discuss the implementation of EGL support in VMD, a widely used molecular visualization application, and we outline benefits of the approach for molecular visualization tasks on petascale computers, clouds, and remote visualization servers. We then provide a brief evaluation of the use of EGL in VMD, with tests using developmental graphics drivers on conventional workstations and on Amazon EC2 G2 GPU-accelerated cloud instance types. We expect that the techniques described here will be of broad benefit to many other visualization applications. PMID:27747137

  4. HIVE-Hexagon: High-Performance, Parallelized Sequence Alignment for Next-Generation Sequencing Data Analysis

    PubMed Central

    Santana-Quintero, Luis; Dingerdissen, Hayley; Thierry-Mieg, Jean; Mazumder, Raja; Simonyan, Vahan

    2014-01-01

    Due to the size of Next-Generation Sequencing data, the computational challenge of sequence alignment has been vast. Inexact alignments can take up to 90% of total CPU time in bioinformatics pipelines. High-performance Integrated Virtual Environment (HIVE), a cloud-based environment optimized for storage and analysis of extra-large data, presents an algorithmic solution: the HIVE-hexagon DNA sequence aligner. HIVE-hexagon implements novel approaches to exploit both characteristics of sequence space and CPU, RAM and Input/Output (I/O) architecture to quickly compute accurate alignments. Key components of HIVE-hexagon include non-redundification and sorting of sequences; floating diagonals of linearized dynamic programming matrices; and consideration of cross-similarity to minimize computations. Availability https://hive.biochemistry.gwu.edu/hive/ PMID:24918764

  5. High-performance SPAD array detectors for parallel photon timing applications

    NASA Astrophysics Data System (ADS)

    Rech, I.; Cuccato, A.; Antonioli, S.; Cammi, C.; Gulinatti, A.; Ghioni, M.

    2012-02-01

    Over the past few years there has been a growing interest in monolithic arrays of single photon avalanche diodes (SPAD) for spatially resolved detection of faint ultrafast optical signals. SPADs implemented in planar technologies offer the typical advantages of microelectronic devices (small size, ruggedness, low voltage, low power, etc.). Furthermore, they have inherently higher photon detection efficiency than PMTs and are able to provide, besides sensitivity down to single photons, very high acquisition speeds. In order to make SPAD arrays more competitive in time-resolved applications it is necessary to address problems such as electrical crosstalk between adjacent pixels; moreover, all the single-photon timing electronics with picosecond resolution have to be developed. In this paper we present a new instrument suitable for single-photon imaging applications and made up of 32 time-resolved parallel channels. The 32x1 pixel array that includes the SPAD detectors represents the system core, and an embedded data elaboration unit performs on-board data processing for single-photon counting applications. Photon-timing information is exported through a custom parallel cable that can be connected to an external multichannel TCSPC system.

  6. A parallel-vector algorithm for rapid structural analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1990-01-01

    A fast, accurate Choleski method for the solution of symmetric systems of linear equations is presented. This direct method is based on a variable-band storage scheme and takes advantage of column heights to reduce the number of operations in the Choleski factorization. The method employs parallel computation in the outermost DO-loop and vector computation via the loop unrolling technique in the innermost DO-loop. The method avoids computations with zeros outside the column heights and, as an option, zeros inside the band. The close relationship between the Choleski and Gauss elimination methods is examined. The minor changes required to convert the Choleski code to a Gauss code to solve non-positive-definite symmetric systems of equations are identified. The results for two large-scale structural analyses performed on supercomputers demonstrate the accuracy and speed of the method.
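
    A minimal sketch of the variable-band (skyline) idea described above, in plain Python/NumPy rather than the paper's parallel-vector Fortran: the factorization only touches entries inside each row's profile, so work on zeros outside the column heights is skipped. The test matrix and profile are illustrative.

        # Envelope/skyline Cholesky sketch: 'first[i]' is the column of the first
        # nonzero in row i, so all work outside the profile is skipped.
        import numpy as np

        def skyline_cholesky(A, first):
            n = A.shape[0]
            L = np.zeros_like(A, dtype=float)
            for i in range(n):
                for j in range(first[i], i + 1):
                    k0 = max(first[i], first[j])
                    # inner product only over the columns shared by both profiles
                    s = A[i, j] - np.dot(L[i, k0:j], L[j, k0:j])
                    if i == j:
                        L[i, j] = np.sqrt(s)
                    else:
                        L[i, j] = s / L[j, j]
            return L

        # Small symmetric positive-definite test matrix with a ragged profile.
        A = np.array([[4., 1., 0., 0.],
                      [1., 5., 2., 0.],
                      [0., 2., 6., 1.],
                      [0., 0., 1., 3.]])
        first = [0, 0, 1, 2]                # first nonzero column of each row
        L = skyline_cholesky(A, first)
        print(np.allclose(L @ L.T, A))      # True: the factor reproduces A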

  7. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    DOEpatents

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

    As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low latency, high bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.

  8. Folded waveguide coupler

    DOEpatents

    Owens, Thomas L.

    1988-03-01

    A resonant cavity waveguide coupler for ICRH of a magnetically confined plasma. The coupler consists of a series of inter-leaved metallic vanes disposed within an enclosure analogous to a very wide, simple rectangular waveguide that has been "folded" several times. At the mouth of the coupler, a polarizing plate is provided which has coupling apertures aligned with selected folds of the waveguide, through which rf waves are launched with the magnetic fields of the waves aligned in parallel with the magnetic fields confining the plasma being heated, to provide coupling to the fast magnetosonic wave within the plasma in the frequency range of about 50-200 MHz. A shorting plate terminates the back of the cavity at a distance approximately equal to one-half the guide wavelength from the mouth of the coupler to ensure that the electric field of the waves launched through the polarizing plate apertures is small while the magnetic field is near a maximum. Power is fed into the coupler folded cavity by means of an input coaxial line feed arrangement at a point which provides an impedance match between the cavity and the coaxial input line.

  9. Achieving high performance in numerical computations on RISC workstations and parallel systems

    SciTech Connect

    Goedecker, S.; Hoisie, A.

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give the scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10-100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, and the effort involved is therefore also acceptable.
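
    As a quick, hedged illustration of the order-of-magnitude payoff the tutorial refers to, the snippet below compares a naive triple-loop matrix multiply with a call to an optimized kernel; the exact speedup factor is machine-dependent, and the example merely stands in for architecture-aware optimization in general.

        # Timing a naive triple loop against an optimized kernel (BLAS via NumPy).
        import time
        import numpy as np

        A = np.random.rand(200, 200)
        B = np.random.rand(200, 200)

        def naive_matmul(A, B):
            n = A.shape[0]
            C = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    s = 0.0
                    for k in range(n):
                        s += A[i, k] * B[k, j]
                    C[i, j] = s
            return C

        t0 = time.perf_counter()
        C1 = naive_matmul(A, B)
        t_naive = time.perf_counter() - t0

        t0 = time.perf_counter()
        C2 = A @ B
        t_blas = time.perf_counter() - t0

        print(f"naive: {t_naive:.2f} s, optimized: {t_blas:.4f} s, "
              f"speedup ~{t_naive / t_blas:.0f}x, results agree: {np.allclose(C1, C2)}")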

  10. Single Fiber Star Couplers. [optical waveguides for spacecraft communication

    NASA Technical Reports Server (NTRS)

    Asawa, C. K.

    1979-01-01

    An ion exchange process was developed and used in the fabrication of state-of-the-art planar star couplers for distribution of optical radiation between optical fibers. An 8 x 8 planar transmission star coupler was packaged for evaluation purposes with sixteen fiber connectors and sixteen pigtails. Likewise, a transmission star coupler and an eight-port reflection star coupler, each with eight-fiber ribbons rigidly attached, as well as a planar coupler with silicon guides and a parallel channel guide with pigtails, were also fabricated. Optical measurements of the transmission star couplers are included with a description of the manufacturing process.

  11. High-performance parallel solver for 3D time-dependent Schrodinger equation for large-scale nanosystems

    NASA Astrophysics Data System (ADS)

    Gainullin, I. K.; Sonkin, M. A.

    2015-03-01

    A parallelized three-dimensional (3D) time-dependent Schrodinger equation (TDSE) solver for one-electron systems is presented in this paper. The TDSE Solver is based on the finite-difference method (FDM) in Cartesian coordinates and uses a simple and explicit leap-frog numerical scheme. The simplicity of the numerical method provides very efficient parallelization and high performance of calculations using Graphics Processing Units (GPUs). For example, calculation of 10^6 time-steps on a 1000x1000x1000 numerical grid (10^9 points) takes only 16 hours on 16 Tesla M2090 GPUs. The TDSE Solver demonstrates scalability (parallel efficiency) close to 100% with some limitations on the problem size. The TDSE Solver is validated by calculation of energy eigenstates of the hydrogen atom (13.55 eV) and the affinity level of the H- ion (0.75 eV). The comparison with other TDSE solvers shows that a GPU-based TDSE Solver is 3 times faster for problems of the same size and with the same cost of computational resources. The usage of a non-regular Cartesian grid or problem-specific non-Cartesian coordinates increases this benefit up to 10 times. The TDSE Solver was applied to the calculation of the resonant charge transfer (RCT) in nanosystems, including several related physical problems, such as electron capture during H+-H0 collision and electron tunneling between the H- ion and a thin metallic island film.
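
    A minimal one-dimensional sketch (NumPy, atomic units) of the explicit leap-frog scheme mentioned above, psi(t+dt) = psi(t-dt) - 2i*dt*H*psi(t). This is not the authors' 3D GPU solver; the grid, potential and step sizes are illustrative choices that merely satisfy the scheme's stability limit.

        # Explicit leap-frog propagation of the 1D time-dependent Schrodinger equation.
        import numpy as np

        nx, dx, dt = 400, 0.1, 0.001
        x = (np.arange(nx) - nx // 2) * dx
        V = 0.5 * x**2                                   # harmonic potential as a stand-in

        def hamiltonian(psi):
            lap = np.zeros_like(psi)
            lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
            return -0.5 * lap + V * psi

        # Gaussian initial state, plus one Euler-like step to start the leap-frog.
        psi_prev = np.exp(-x**2).astype(complex)
        psi_prev /= np.sqrt(np.sum(np.abs(psi_prev)**2) * dx)
        psi = psi_prev - 1j * dt * hamiltonian(psi_prev)

        for step in range(5000):
            psi_next = psi_prev - 2j * dt * hamiltonian(psi)   # leap-frog update
            psi_prev, psi = psi, psi_next

        print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)   # stays close to 1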

  12. High-performance partially aligned semiconductive single-walled carbon nanotube transistors achieved with a parallel technique.

    PubMed

    Wang, Yilei; Pillai, Suresh Kumar Raman; Chan-Park, Mary B

    2013-09-01

    Single-walled carbon nanotubes (SWNTs) are widely thought to be a strong contender for next-generation printed electronic transistor materials. However, large-scale solution-based parallel assembly of SWNTs to obtain high-performance transistor devices is challenging. SWNTs have anisotropic properties and, although partial alignment of the nanotubes has been theoretically predicted to achieve optimum transistor device performance, thus far no parallel solution-based technique can achieve this. Herein a novel solution-based technique, the immersion-cum-shake method, is reported to achieve partially aligned SWNT networks using semiconductive (99% enriched) SWNTs (s-SWNTs). By immersing an aminosilane-treated wafer into a solution of nanotubes placed on a rotary shaker, the repetitive flow of the nanotube solution over the wafer surface during the deposition process orients the nanotubes toward the fluid flow direction. By adjusting the nanotube concentration in the solution, the nanotube density of the partially aligned network can be controlled; linear densities ranging from 5 to 45 SWNTs/μm are observed. Through control of the linear SWNT density and channel length, the optimum SWNT-based field-effect transistor devices achieve outstanding performance metrics (with an on/off ratio of ~3.2 × 10^4 and mobility 46.5 cm^2/Vs). Atomic force microscopy shows that the partial alignment is uniform over an area of 20 × 20 mm^2 and confirms that the orientation of the nanotubes is mostly along the fluid flow direction, with a narrow orientation scatter characterized by a full width at half maximum (FWHM) of <15° for all but the densest film, which is 35°. This parallel process is large-scale applicable and exploits the anisotropic properties of the SWNTs, presenting a viable path forward for industrial adoption of SWNTs in printed, flexible, and large-area electronics.

  13. Message passing interface and multithreading hybrid for parallel molecular docking of large databases on petascale high performance computing machines.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2013-04-30

    A mixed parallel scheme that combines message passing interface (MPI) and multithreading was implemented in the AutoDock Vina molecular docking program. The resulting program, named VinaLC, was tested on the petascale high performance computing (HPC) machines at Lawrence Livermore National Laboratory. To exploit the typical cluster-type supercomputers, thousands of docking calculations were dispatched by the master process to run simultaneously on thousands of slave processes, where each docking calculation takes one slave process on one node, and within the node each docking calculation runs via multithreading on multiple CPU cores and shared memory. Input and output of the program and the data handling within the program were carefully designed to deal with large databases and ultimately achieve HPC on a large number of CPU cores. Parallel performance analysis of the VinaLC program shows that the code scales up to more than 15K CPUs with a very low overhead cost of 3.94%. One million flexible compound docking calculations took only 1.4 h to finish on about 15K CPUs. The docking accuracy of VinaLC has been validated against the DUD data set by the re-docking of X-ray ligands and an enrichment study; 64.4% of the top-scoring poses have RMSD values under 2.0 Å. The program has been demonstrated to have good enrichment performance on 70% of the targets in the DUD data set. An analysis of the enrichment factors calculated at various percentages of the screening database indicates VinaLC has very good early recovery of actives. PMID:23345155
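
    The sketch below illustrates the master/slave dispatch pattern described above with mpi4py: the master hands out one docking task at a time and collects results, while each worker loops until told to stop. It is not the VinaLC implementation, and run_docking is a hypothetical placeholder for the per-compound calculation.

        # Master/worker task dispatch with mpi4py (run with e.g. mpirun -n 8).
        from mpi4py import MPI

        TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

        def run_docking(compound):
            return (compound, len(str(compound)))         # stand-in for a docking score

        def main():
            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            if rank == 0:                                 # master: hand out one compound at a time
                compounds = [f"ligand_{i:05d}" for i in range(1000)]
                results, next_item, active = [], 0, size - 1
                status = MPI.Status()
                for worker in range(1, size):             # prime every worker with a first task
                    comm.send(compounds[next_item], dest=worker, tag=TAG_WORK)
                    next_item += 1
                while active > 0:
                    result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
                    results.append(result)
                    worker = status.Get_source()
                    if next_item < len(compounds):
                        comm.send(compounds[next_item], dest=worker, tag=TAG_WORK)
                        next_item += 1
                    else:
                        comm.send(None, dest=worker, tag=TAG_STOP)
                        active -= 1
                print("docked", len(results), "compounds")
            else:                                         # worker: dock until told to stop
                status = MPI.Status()
                while True:
                    task = comm.recv(source=0, status=status)
                    if status.Get_tag() == TAG_STOP:
                        break
                    comm.send(run_docking(task), dest=0, tag=TAG_DONE)

        if __name__ == "__main__":
            main()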

  15. DIRECTIONAL COUPLERS

    DOEpatents

    Nigg, D.J.

    1961-12-01

    A directional coupler of small size is designed. Stripline conductors of non-rectilinear configuration, separated from each other by a thin dielectric spacer, cross each other at right angles in at least two locations, thus providing practically pure capacitive coupling which substantially eliminates undesirable inductive coupling. The conductors are sandwiched between a pair of ground planes. The coupling factor is dependent only on the thickness and dielectric constant of the dielectric spacer at the point of conductor crossover. (AEC)

  16. Using the Eclipse Parallel Tools Platform to Assist Earth Science Model Development and Optimization on High Performance Computers

    NASA Astrophysics Data System (ADS)

    Alameda, J. C.

    2011-12-01

    Development and optimization of computational science models, particularly on high performance computers and, with the advent of ubiquitous multicore processor systems, on practically every system, have long been accomplished with basic software tools: typically command-line based compilers, debuggers, and performance tools that have not changed substantially from the days of serial and early vector computers. However, model complexity, including the complexity added by modern message passing libraries such as MPI, and the need for hybrid code models (such as OpenMP and MPI) to take full advantage of high performance computers with an increasing core count per shared memory node, have made development and optimization of such codes an increasingly arduous task. Additional architectural developments, such as many-core processors, only complicate the situation further. In this paper, we describe how our NSF-funded project, "SI2-SSI: A Productive and Accessible Development Workbench for HPC Applications Using the Eclipse Parallel Tools Platform" (WHPC), seeks to improve the Eclipse Parallel Tools Platform, an environment designed to support scientific code development targeted at a diverse set of high performance computing systems. Our WHPC project takes an application-centric view to improving PTP. We are using a set of scientific applications, each with a variety of challenges, both to drive further improvements to each scientific application and to understand shortcomings in Eclipse PTP from an application developer perspective, which in turn drives the list of improvements we seek to make. We are also partnering with performance tool providers to drive higher quality performance tool integration. We have partnered with the Cactus group at Louisiana State University to improve Eclipse's ability to work with computational frameworks and extremely complex build systems, as well as to develop educational materials to incorporate into

  17. Development of a high performance parallel computing platform and its use in the study of nanostructures: Clusters, sheets and tubes

    NASA Astrophysics Data System (ADS)

    Gowtham, S.

    Small clusters of gallium oxide, a technologically important high temperature ceramic, together with the interaction of nucleic acid bases with graphene and small-diameter carbon nanotubes, are the focus of the first principles calculations in this work. A high performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first principles calculations are based on density functional theory employing either the local density or gradient-corrected approximation together with plane wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we have performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculated results find that all lowest energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters are highly ionic, similar to the case of bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga3O, it is found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by the experimental results of Neumark's group. Guided by the results of the gallium oxide cluster calculations, the performance-related challenge of such computational simulations, that of producing high performance computers/platforms, has been addressed. Several engineering aspects were thoroughly studied during the design, development and implementation of the high performance parallel computing platform, RAMA, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the RAMA cluster was extensively customized to make it easy to understand and use, for administrators as well as end-users. Following the results of benchmark

  18. An Overview of High-performance Parallel Big Data transfers over multiple network channels with Transport Layer Security (TLS) and TLS plus Perfect Forward Secrecy (PFS)

    SciTech Connect

    Fang, Chin; Corttrell, R. A.

    2015-05-06

    This Technical Note provides an overview of high-performance parallel Big Data transfers with and without encryption for data in transit over multiple network channels. It shows that with the parallel approach, it is feasible to carry out high-performance parallel "encrypted" Big Data transfers without serious impact to throughput, but other impacts, e.g. energy consumption, should be investigated. It also explains our rationale for using a statistics-based approach for gaining understanding from test results and for improving the system. The presentation is of a high-level nature. Nevertheless, at the end we pose some questions and identify potentially fruitful directions for future work.

  19. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled the creation of unprecedented applications on the web. The low performance of the web browser compared to native applications, however, remains the bottleneck for computationally intensive tasks, including visualization of complex scenes, real-time physical simulations and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.

  1. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementations of two numerical tools popular in optical studies of biological materials, the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program, were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C language. The parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability and the speedup was proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring a communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. The typical speedup achieved for parallel MCML (up to 326x) demonstrated a linear increase with problem size. The precision of the MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of the total optical response and 10^8 photon packets for spatially-resolved results. The presented parallel versions of the MCML and IAD programs are portable on multiple computing platforms. The parallel programs could significantly speed up the simulation for scientists and be utilized to their full potential in computing systems that are readily available without additional costs.
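
    A hedged sketch, not the parallel MCML code, of the embarrassingly parallel structure described above: photon packets are split across MPI ranks and the per-rank tallies are combined with a reduction. The single-layer absorption model below is a toy stand-in.

        # Distributing photon packets across MPI ranks and reducing the tallies.
        import numpy as np
        from mpi4py import MPI

        def simulate_packets(n, mu_a=0.1, mu_s=10.0, rng=None):
            """Toy photon-packet loop: each packet loses weight to absorption each step."""
            if rng is None:
                rng = np.random.default_rng()
            albedo = mu_s / (mu_a + mu_s)
            absorbed = 0.0
            for _ in range(n):
                weight = 1.0
                while weight > 1e-4:
                    absorbed += weight * (1.0 - albedo)   # deposit the absorbed fraction
                    weight *= albedo
                    if rng.random() < 0.1:                # crude chance of escaping the layer
                        break
            return absorbed

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        total_packets = 10**5
        local = simulate_packets(total_packets // size, rng=np.random.default_rng(seed=rank))
        absorbed = comm.reduce(local, op=MPI.SUM, root=0)
        if rank == 0:
            print("absorbed fraction:", absorbed / total_packets)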

  2. Parallel-META 2.0: Enhanced Metagenomic Data Analysis with Functional Annotation, High Performance Computing and Advanced Visualization

    PubMed Central

    Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation- intensive, especially when there are many species in a metagenomic sample, and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computation analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in high-throughput manner and on large scale. PMID:24595159

  3. Parallel-META 2.0: enhanced metagenomic data analysis with functional annotation, high performance computing and advanced visualization.

    PubMed

    Su, Xiaoquan; Pan, Weihua; Song, Baoxing; Xu, Jian; Ning, Kang

    2014-01-01

    The metagenomic method directly sequences and analyses genome information from microbial communities. The main computational tasks for metagenomic analyses include taxonomical and functional structure analysis for all genomes in a microbial community (also referred to as a metagenomic sample). With the advancement of Next Generation Sequencing (NGS) techniques, the number of metagenomic samples and the data size for each sample are increasing rapidly. Current metagenomic analysis is both data- and computation- intensive, especially when there are many species in a metagenomic sample, and each has a large number of sequences. As such, metagenomic analyses require extensive computational power. The increasing analytical requirements further augment the challenges for computation analysis. In this work, we have proposed Parallel-META 2.0, a metagenomic analysis software package, to cope with such needs for efficient and fast analyses of taxonomical and functional structures for microbial communities. Parallel-META 2.0 is an extended and improved version of Parallel-META 1.0, which enhances the taxonomical analysis using multiple databases, improves computation efficiency by optimized parallel computing, and supports interactive visualization of results in multiple views. Furthermore, it enables functional analysis for metagenomic samples including short-reads assembly, gene prediction and functional annotation. Therefore, it could provide accurate taxonomical and functional analyses of the metagenomic samples in high-throughput manner and on large scale.

  4. Design of high-performing hybrid meta-heuristics for unrelated parallel machine scheduling with machine eligibility and precedence constraints

    NASA Astrophysics Data System (ADS)

    Afzalirad, Mojtaba; Rezaeian, Javad

    2016-04-01

    This study involves an unrelated parallel machine scheduling problem in which sequence-dependent set-up times, different release dates, machine eligibility and precedence constraints are considered to minimize total late work. A new mixed-integer programming model is presented, and two efficient hybrid meta-heuristics, a genetic algorithm and ant colony optimization, combined with the acceptance strategy of the simulated annealing algorithm (the Metropolis acceptance rule), are proposed to solve this problem. Manifestly, the precedence constraints greatly increase the complexity of the scheduling problem with respect to generating feasible solutions, especially in a parallel machine environment. In this research, a new corrective algorithm is proposed to obtain feasibility in all stages of the algorithms. The performance of the proposed algorithms is evaluated on numerical examples. The results indicate that the suggested hybrid ant colony optimization statistically outperformed the proposed hybrid genetic algorithm in solving large-size test problems.
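
    The snippet below sketches the Metropolis acceptance rule that the hybrid meta-heuristics above borrow from simulated annealing: an improving move is always kept, while a worsening move is accepted with probability exp(-delta/T). The cooling schedule, move operator and toy cost function are illustrative, not those of the cited algorithms.

        # Metropolis acceptance rule inside a simple annealing-style local search.
        import math
        import random

        def metropolis_accept(current_cost, candidate_cost, temperature, rng=random):
            delta = candidate_cost - current_cost
            if delta <= 0:
                return True                               # improving moves are always kept
            return rng.random() < math.exp(-delta / temperature)

        def local_search(start, cost, neighbour, t0=10.0, cooling=0.95, iters=1000):
            best = current = start
            temperature = t0
            for _ in range(iters):
                candidate = neighbour(current)
                if metropolis_accept(cost(current), cost(candidate), temperature):
                    current = candidate
                    if cost(current) < cost(best):
                        best = current
                temperature *= cooling                    # geometric cooling schedule
            return best

        # Toy use: permutations of 8 jobs, cost = weighted position penalty.
        jobs = list(range(8))
        cost = lambda p: sum(i * j for i, j in enumerate(p))
        def neighbour(p):
            q = p[:]
            i, j = random.sample(range(len(q)), 2)
            q[i], q[j] = q[j], q[i]
            return q
        print(local_search(jobs, cost, neighbour))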

  5. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    PubMed

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-01

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers-particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus. PMID:26575558

  7. Making resonance a common case: a high-performance implementation of collective I/O on parallel file systems

    SciTech Connect

    Davis, Marion Kei; Zhang, Xuechen; Jiang, Song

    2009-01-01

    Collective I/O is a widely used technique to improve I/O performance in parallel computing. It can be implemented as a client-based or server-based scheme. The client-based implementation is more widely adopted in MPI-IO software such as ROMIO because of its independence from the storage system configuration and its greater portability. However, existing implementations of client-side collective I/O do not take into account the actual pattern of file striping over multiple I/O nodes in the storage system. This can cause a significant number of requests for non-sequential data at I/O nodes, substantially degrading I/O performance. Investigating the surprisingly high I/O throughput achieved when there is an accidental match between a particular request pattern and the data striping pattern on the I/O nodes, we reveal the resonance phenomenon as the cause. Exploiting readily available information on data striping from the metadata server in popular file systems such as PVFS2 and Lustre, we design a new collective I/O implementation technique, resonant I/O, that makes resonance a common case. Resonant I/O rearranges requests from multiple MPI processes to transform non-sequential data accesses on I/O nodes into sequential accesses, significantly improving I/O performance without compromising the independence of a client-based implementation. We have implemented our design in ROMIO. Our experimental results show that the scheme can increase I/O throughput for some commonly used parallel I/O benchmarks such as mpi-io-test and ior-mpi-io over the existing implementation of ROMIO by up to 157%, with no scenario demonstrating significantly decreased performance.
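
    A hedged sketch of the striping arithmetic behind resonant I/O as described above: given the stripe size and the number of I/O servers (information the paper obtains from the file system's metadata server), byte-range requests from all processes can be regrouped per server and sorted by offset so that each server sees sequential accesses. Round-robin striping is assumed; this is not the ROMIO implementation.

        # Regrouping byte-range requests by the I/O server that owns each stripe.
        from collections import defaultdict

        def group_by_server(requests, stripe_size, num_servers):
            """requests: list of (offset, length) byte ranges from all MPI processes."""
            per_server = defaultdict(list)
            for offset, length in requests:
                start, end = offset, offset + length
                while start < end:
                    stripe = start // stripe_size
                    server = stripe % num_servers                  # round-robin striping assumed
                    chunk_end = min(end, (stripe + 1) * stripe_size)
                    per_server[server].append((start, chunk_end - start))
                    start = chunk_end
            for server in per_server:
                per_server[server].sort()                          # sequential order per server
            return dict(per_server)

        # Interleaved requests from several processes on a file striped over 2 servers.
        reqs = [(i * 65536, 65536) for i in range(8)]
        for server, chunks in group_by_server(reqs, stripe_size=65536, num_servers=2).items():
            print("server", server, "->", chunks)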

  8. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    PubMed

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements are imminent due to the large and complex systems to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters including very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing and thus combines the benefits of dedicated super-computing centres and grid infrastructures. The demands for this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly are also able to increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for sectors such as life science and health care, as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  9. Transverse emittance dilution due to coupler kicks in linear accelerators

    NASA Astrophysics Data System (ADS)

    Buckley, Brandon; Hoffstaetter, Georg H.

    2007-11-01

    One of the main concerns in the design of low emittance linear accelerators (linacs) is the preservation of beam emittance. Here we discuss one possible source of emittance dilution, the coupler kick, due to transverse electromagnetic fields in the accelerating cavities of the linac caused by the power coupler geometry. In addition to emittance growth, the coupler kick also produces orbit distortions. It is common wisdom that emittance growth from coupler kicks can be strongly reduced by using two couplers per cavity mounted opposite each other or by having the couplers of successive cavities alternate from above to below the beam pipe so as to cancel each individual kick. While this is correct, including two couplers per cavity or alternating the coupler location requires large technical changes and increased cost for superconducting cryomodules where cryogenic pipes are arranged parallel to a string of several cavities. We therefore analyze consequences of alternate coupler placements. We show here that alternating the coupler location from above to below compensates the emittance growth as well as the orbit distortions. For sufficiently large Q values, alternating the coupler location from before to after the cavity leads to a cancellation of the orbit distortion but not of the emittance growth, whereas alternating the coupler location from before and above to behind and below the cavity cancels the emittance growth but not the orbit distortion. We show that cancellations hold for sufficiently large Q values. These compensations hold even when each cavity is individually detuned, e.g., by microphonics. Another effective method for reducing coupler kicks that is studied is the optimization of the phase of the coupler kick so as to minimize the effects on emittance from each coupler. This technique is independent of the coupler geometry but relies on operating on crest. A final technique studied is symmetrization of the cavity geometry in the coupler region with

  10. High Performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush

    1994-01-01

    High performance FORTRAN is a set of extensions for FORTRAN 90 designed to allow specification of data parallel algorithms. The programmer annotates the program with distribution directives to specify the desired layout of data. The underlying programming model provides a global name space and a single thread of control. Explicitly parallel constructs allow the expression of fairly controlled forms of parallelism in particular data parallelism. Thus the code is specified in a high level portable manner with no explicit tasking or communication statements. The goal is to allow architecture specific compilers to generate efficient code for a wide variety of architectures including SIMD, MIMD shared and distributed memory machines.

  11. Multimode Directional Coupler

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N. (Inventor); Wintucky, Edwin G. (Inventor)

    2016-01-01

    A multimode directional coupler is provided. In some embodiments, the multimode directional coupler is configured to receive a primary signal and a secondary signal at a first port of a primary waveguide. The primary signal is configured to propagate through the primary waveguide and be outputted at a second port of the primary waveguide. The multimode directional coupler also includes a secondary waveguide configured to couple the secondary signal from the primary waveguide with no coupling of the primary signal into the secondary waveguide. The secondary signal is configured to propagate through the secondary waveguide and be outputted from a port of the secondary waveguide.

  12. Aluminum nitride grating couplers.

    PubMed

    Ghosh, Siddhartha; Doerr, Christopher R; Piazza, Gianluca

    2012-06-10

    Grating couplers in sputtered aluminum nitride, a piezoelectric material with low loss in the C band, are demonstrated. Gratings and a waveguide micromachined on a silicon wafer with 600 nm minimum feature size were defined in a single lithography step without partial etching. Silicon dioxide (SiO(2)) was used for cladding layers. Peak coupling efficiency of -6.6 dB and a 1 dB bandwidth of 60 nm have been measured. This demonstration of wire waveguides and wideband grating couplers in a material that also has piezoelectric and elasto-optic properties will enable new functions for integrated photonics and optomechanics.

  13. Element of an inductive coupler

    DOEpatents

    Hall, David R.; Fox, Joe

    2006-08-15

    An element for an inductive coupler in a downhole component comprises magnetically conductive material, which is disposed in a recess in annular housing. The magnetically conductive material forms a generally circular trough. The circular trough comprises an outer generally U-shaped surface, an inner generally U-shaped surface, and two generally planar surfaces joining the inner and outer surfaces. The element further comprises pressure relief grooves in at least one of the surfaces of the circular trough. The pressure relief grooves may be scored lines. Preferably the pressure relief grooves are parallel to the magnetic field generated by the magnetically conductive material. The magnetically conductive material is selected from the group consisting of soft iron, ferrite, a nickel iron alloy, a silicon iron alloy, a cobalt iron alloy, and a mu-metal. Preferably, the annular housing is a metal ring.

  14. Universal grating coupler design

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Flueckiger, Jonas; Lin, Charlie; Chrostowski, Lukas

    2013-10-01

    A universal design methodology for grating couplers based on the silicon-on-insulator platform is presented in this paper. Our design methodology accommodates various etch depths, silicon thicknesses (e.g., 220 nm, 300 nm), incident angles, and cladding materials (e.g., silicon oxide or air), and has been verified by simulations and measurement results. Furthermore, the design methodology presented can be applied to a wide range of wavelengths, from 1260 nm to 1675 nm.

  15. HOM/LOM Coupler Study for the ILC Crab Cavity

    SciTech Connect

    Xiao, L.; Li, Z.; Ko, K.; /SLAC

    2007-04-16

    The FNAL 9-cell 3.9GHz deflecting mode cavity designed for the CKM experiment was chosen as the baseline design for the ILC BDS crab cavity. The full 9-cell CKM cavity including the coupler end-groups was simulated using the parallel eigensolver Omega3P and the scattering parameter solver S3P. It was found that the notch filters for both the HOM/LOM couplers are very sensitive to the notch gap, about 1.6MHz/micron, which is more than 10 times more sensitive than the TTF cavity. It was also found in the simulation that the unwanted vertical {pi}-mode (SOM) is strongly coupled to the horizontal 7{pi}/9 mode, which causes x-y coupling and reduces the effectiveness of the SOM damping. To meet the ILC requirements, the HOM/LOM couplers were redesigned to address these issues. With the new designs, the damping of the HOM/LOM modes is improved. The sensitivity of the notch filter for the HOM coupler is reduced by one order of magnitude. The notch filter for the LOM coupler is eliminated in the new design, which significantly simplifies the geometry. In this paper, we present the simulation results for the original CKM cavity and the progress on the HOM/LOM coupler redesign and optimization.

  16. Microwave coupler and method

    DOEpatents

    Holcombe, Cressie E.

    1985-01-01

    The present invention is directed to a microwave coupler for enhancing the heating or metallurgical treatment of materials within a cold-wall, rapidly heated cavity as provided by a microwave furnace. The coupling material of the present invention is an alpha-rhombohedral-boron-derivative-structure material such as boron carbide or boron silicide which can be appropriately positioned as a susceptor within the furnace to heat other material or be in powder particulate form so that composites and structures of boron carbide such as cutting tools, grinding wheels and the like can be rapidly and efficiently formed within microwave furnaces.

  17. Microwave coupler and method

    DOEpatents

    Holcombe, C.E.

    1984-11-29

    The present invention is directed to a microwave coupler for enhancing the heating or metallurgical treatment of materials within a cold-wall, rapidly heated cavity as provided by a microwave furnace. The coupling material of the present invention is an alpha-rhombohedral-boron-derivative-structure material such as boron carbide or boron silicide which can be appropriately positioned as a susceptor within the furnace to heat other material or be in powder particulate form so that composites and structures of boron carbide such as cutting tools, grinding wheels and the like can be rapidly and efficiently formed within microwave furnaces.

  18. Multimode waveguide based directional coupler

    NASA Astrophysics Data System (ADS)

    Ahmed, Rajib; Rifat, Ahmmed A.; Sabouri, Aydin; Al-Qattan, Bader; Essa, Khamis; Butt, Haider

    2016-07-01

    The Silicon-on-Insulator (SOI) based platform overcomes limitations of previous copper and fiber based technologies. Due to their high index difference, SOI waveguides (WG) and directional couplers (DC) are widely used for high speed optical networks and hybrid electro-optical interconnections. TE00-TE01, TE00-TE00 and TM00-TM00 SOI directional couplers are designed with symmetrical and asymmetrical configurations to couple to the TE00, TE01 and TM00 modes in a multi-mode semi-triangular ring-resonator configuration, which will be applicable for multi-analyte sensing. The couplers are designed with the effective index method and their structural parameters are optimized with consideration of coupler length, wavelength and polarization dependence. Lastly, the performance of the couplers is analyzed in terms of cross-talk, mode overlap factor, coupling length and coupling efficiency.

  19. Mid-IR fused fiber couplers

    NASA Astrophysics Data System (ADS)

    Stevens, G.; Woodbridge, T.

    2016-03-01

    We present results from our recent efforts on developing single-mode fused couplers in ZBLAN fibre. We have developed a custom fusion workstation for working with lower melting temperature fibres, such as ZBLAN and chalcogenide fibres. Our workstation uses as its heat source a precisely controlled electrical heater designed to operate at temperatures between 100-250°C. The heated region of the fibres was also placed in an inert atmosphere to avoid the formation of microcrystal inclusions during fusion. We first developed a process for pulling adiabatic tapers in 6/125 μm ZBLAN fibre. The tapers were measured actively during manufacture using a 2000 nm source. The process was automated so that the heater temperature and motor speed automatically adjusted to pull the taper at constant tension. This process was then further developed so that we could fuse and draw two parallel 6/125 μm ZBLAN fibres, forming a single-mode coupler. Low ratio couplers (1-10%) that could be used as power monitors were manufactured with an excess loss of 0.76 dB. We have also manufactured 50/50 splitters and wavelength division multiplexers (WDMs). However, the excess loss of these devices was typically 2-3 dB. The increased losses were due to localised necking and surface defects forming as the tapers were pulled further to achieve a greater coupling ratio. Initial experiments with chalcogenide fibre have shown that our process can be readily adapted for chalcogenide fibres. A 5% coupler with 1.5 dB insertion loss was manufactured using commercial off-the-shelf (COTS) fibres.

  20. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    NASA Astrophysics Data System (ADS)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
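
    The following is a minimal, hypothetical Python sketch of the two ideas named in the abstract, dynamic assignment of independent tracks to workers plus a vectorized inner loop over track segments; it is not the authors' proxy application, and the attenuation update is a deliberately simplified stand-in for a real MOC transport kernel.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def sweep_track(track):
            # Vectorized inner loop: update the angular flux over all segments of one
            # characteristic track at once (simplified exponential-attenuation model).
            sigma_t, lengths, q = track["sigma_t"], track["lengths"], track["source"]
            att = np.exp(-sigma_t * lengths)            # SIMD-friendly elementwise operations
            psi_out = track["psi_in"] * att + q * (1.0 - att)
            return psi_out.sum()                        # toy scalar tally per track

        # Build a toy set of independent tracks (placeholder data, 512 segments each).
        rng = np.random.default_rng(0)
        tracks = [{"sigma_t": rng.uniform(0.1, 1.0, 512),
                   "lengths": rng.uniform(0.01, 0.1, 512),
                   "source":  rng.uniform(0.0, 1.0, 512),
                   "psi_in":  rng.uniform(0.0, 1.0, 512)} for _ in range(64)]

        # Task-based parallelism: the executor hands tracks to threads dynamically,
        # which load-balances tracks of unequal cost.
        with ThreadPoolExecutor(max_workers=8) as pool:
            tallies = list(pool.map(sweep_track, tracks))
        print(sum(tallies))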

  1. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    PubMed

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
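
    A hedged illustration of the docking strategy described above: compounds are preordered from most to least flexible and the number of energy evaluations is scaled with the number of rotatable bonds before the work is distributed over MPI ranks. The compound list, the scaling rule, and the commented-out dock() call are hypothetical placeholders, not Autodock4.lga.MPI code.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            # Hypothetical compound records: (name, number of rotatable bonds).
            compounds = [("lig_a", 12), ("lig_b", 3), ("lig_c", 7), ("lig_d", 0)]
            # Preorder from most to least flexible, then deal round-robin to the ranks.
            compounds.sort(key=lambda c: c[1], reverse=True)
            chunks = [compounds[i::size] for i in range(size)]
        else:
            chunks = None

        my_compounds = comm.scatter(chunks, root=0)

        for name, n_rot in my_compounds:
            # Assign energy evaluations as a function of rotatable bonds
            # (an illustrative rule only; the paper's actual schedule may differ).
            n_evals = 250_000 + 100_000 * n_rot
            # dock(name, n_evals)  # placeholder for the per-ligand docking call
            print(f"rank {rank}: {name} with {n_evals} energy evaluations")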

  2. Toward fully automated high performance computing drug discovery: a massively parallel virtual screening pipeline for docking and molecular mechanics/generalized Born surface area rescoring to improve enrichment.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2014-01-27

    In this work we announce and evaluate a high throughput virtual screening pipeline for in-silico screening of virtual compound databases using high performance computing (HPC). Notable features of this pipeline are an automated receptor preparation scheme with unsupervised binding site identification. The pipeline includes receptor/target preparation, ligand preparation, VinaLC docking calculation, and molecular mechanics/generalized Born surface area (MM/GBSA) rescoring using the GB model by Onufriev and co-workers [J. Chem. Theory Comput. 2007, 3, 156-169]. Furthermore, we leverage HPC resources to perform an unprecedented, comprehensive evaluation of MM/GBSA rescoring when applied to the DUD-E data set (Directory of Useful Decoys: Enhanced), in which we selected 38 protein targets and a total of ∼0.7 million actives and decoys. The computer wall time for virtual screening has been reduced drastically on HPC machines, which increases the feasibility of extremely large ligand database screening with more accurate methods. HPC resources allowed us to rescore 20 poses per compound and evaluate the optimal number of poses to rescore. We find that keeping 5-10 poses is a good compromise between accuracy and computational expense. Overall the results demonstrate that MM/GBSA rescoring has higher average receiver operating characteristic (ROC) area under curve (AUC) values and consistently better early recovery of actives than Vina docking alone. Specifically, the enrichment performance is target-dependent. MM/GBSA rescoring significantly outperforms Vina docking for the folate enzymes, kinases, and several other enzymes. The more accurate energy function and solvation terms of the MM/GBSA method allow MM/GBSA to achieve better enrichment, but the rescoring is still limited by the docking method to generate the poses with the correct binding modes. PMID:24358939
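
    For readers who want to reproduce the enrichment bookkeeping (not the rescoring itself), the ROC AUC comparison between docking scores and MM/GBSA rescores can be computed along the following lines; the score and label arrays here are synthetic placeholders, not DUD-E data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        labels = rng.integers(0, 2, size=1000)                          # 1 = active, 0 = decoy (synthetic)
        vina = rng.normal(-7.0, 1.5, size=1000) - 0.5 * labels          # more negative = better score
        mmgbsa = rng.normal(-30.0, 8.0, size=1000) - 6.0 * labels

        # Negate so that "higher value = more likely active", as roc_auc_score expects.
        print("Vina AUC:   ", roc_auc_score(labels, -vina))
        print("MM/GBSA AUC:", roc_auc_score(labels, -mmgbsa))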

  3. Toward fully automated high performance computing drug discovery: a massively parallel virtual screening pipeline for docking and molecular mechanics/generalized Born surface area rescoring to improve enrichment.

    PubMed

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2014-01-27

    In this work we announce and evaluate a high throughput virtual screening pipeline for in-silico screening of virtual compound databases using high performance computing (HPC). Notable features of this pipeline are an automated receptor preparation scheme with unsupervised binding site identification. The pipeline includes receptor/target preparation, ligand preparation, VinaLC docking calculation, and molecular mechanics/generalized Born surface area (MM/GBSA) rescoring using the GB model by Onufriev and co-workers [J. Chem. Theory Comput. 2007, 3, 156-169]. Furthermore, we leverage HPC resources to perform an unprecedented, comprehensive evaluation of MM/GBSA rescoring when applied to the DUD-E data set (Directory of Useful Decoys: Enhanced), in which we selected 38 protein targets and a total of ∼0.7 million actives and decoys. The computer wall time for virtual screening has been reduced drastically on HPC machines, which increases the feasibility of extremely large ligand database screening with more accurate methods. HPC resources allowed us to rescore 20 poses per compound and evaluate the optimal number of poses to rescore. We find that keeping 5-10 poses is a good compromise between accuracy and computational expense. Overall the results demonstrate that MM/GBSA rescoring has higher average receiver operating characteristic (ROC) area under curve (AUC) values and consistently better early recovery of actives than Vina docking alone. Specifically, the enrichment performance is target-dependent. MM/GBSA rescoring significantly outperforms Vina docking for the folate enzymes, kinases, and several other enzymes. The more accurate energy function and solvation terms of the MM/GBSA method allow MM/GBSA to achieve better enrichment, but the rescoring is still limited by the docking method to generate the poses with the correct binding modes.

  4. A microfiber coupler tip thermometer.

    PubMed

    Ding, Ming; Wang, Pengfei; Brambilla, Gilberto

    2012-02-27

    A compact thermometer based on a broadband microfiber coupler tip is demonstrated. This sensor can measure a broad temperature interval ranging from room temperature to 1283 °C with sub-200 µm spatial resolution. An average sensitivity of 11.96 pm/°C was achieved for a coupler tip with ~2.5 µm diameter. This is the highest temperature measured with a silica optical fiber device.
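
    As an illustrative back-of-the-envelope check using the quoted sensitivity (the 10 pm spectral resolution is an assumed figure, not one from the paper), the corresponding temperature resolution would be roughly

        \Delta T \approx \frac{\Delta\lambda}{S} = \frac{10\ \mathrm{pm}}{11.96\ \mathrm{pm/^{\circ}C}} \approx 0.8\ ^{\circ}\mathrm{C}.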

  5. Polymer waveguide with tunable optofluidic couplers for card-to-backplane optical interconnects

    NASA Astrophysics Data System (ADS)

    Jiang, Guomin; Baig, Sarfaraz; Wang, Michael R.

    2014-03-01

    Polymeric waveguides with tunable optofluidic couplers are fabricated by the vacuum assisted microfluidic technique for card-to-backplane optical interconnect applications. The optofluidic coupler on a backplane consists of polymer waveguides and a perpendicular microfluidic channel with inclined sidewalls. An index matching liquid and air bubbles are located in the microfluidic hollow channel. The activation or deactivation of the surface normal coupling of the optofluidic coupler is accomplished by setting air bubbles or index matching liquid to be in contact with the waveguide mirrors. 10 Gbps eye diagrams of the card-to-backplane optical interconnect link have been demonstrated showing the high performance of the interconnect system.

  6. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigations into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using the adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had the responsibility for the main kernel design and development, SCA had the lead in parallel solver technology, and Sandia's main contribution to this venture was guidance on OO technologies. CONVEX and Texaco supported the partnership with hardware resources and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  7. Application of parallel gradient high performance liquid chromatography with ultra-violet, evaporative light scattering and electrospray mass spectrometric detection for the quantitative quality control of the compound file to support pharmaceutical discovery.

    PubMed

    Squibb, Anthony W; Taylor, Mark R; Parnas, Barry L; Williams, Gareth; Girdler, Richard; Waghorn, Peter; Wright, Adrian G; Pullen, Frank S

    2008-05-01

    The success of drug discovery assays, using plate-based technologies, relies heavily on the quality of the substrates being tested. Sample purity, identity and concentration must be assured for a screening hit to be validated. Most major pharmaceutical companies maintain large liquid screening files with often in excess of one million stock solutions, typically dissolved in DMSO. However, due to the inherent inaccuracies of high-throughput gravimetric analysis and automated dilution, stock solution concentrations can vary significantly from the assumed nominal value. Here, we present a rapid and effective method for measuring purity, identity and concentration of these stock solutions using four high-performance liquid chromatography (HPLC) columns with parallel ultraviolet spectrophotometry (UV), electrospray ionisation mass spectrometry (ESI-MS) and evaporative light scattering detection (ELSD) with a throughput of 1 min per sample.

  8. 30 CFR 75.805 - Couplers.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution § 75.805 Couplers. Couplers that are used with medium-voltage or high-voltage power circuits shall be of the three-phase type...

  9. Efficient waveguide coupler based on metal materials

    NASA Astrophysics Data System (ADS)

    Wu, Wenjun; Yang, Junbo; Chang, Shengli; Zhang, Jingjing; Lu, Huanyu

    2015-10-01

    Because of the diffraction limit of light, the scale of optical elements remains on the order of the wavelength, so optical interfaces and nano-electronic components cannot be matched directly, and the development of photonics technology encounters a bottleneck. To address the problem of coupling light into subwavelength waveguides, this paper proposes a coupler model based on metal materials. By exploiting surface plasmon polariton (SPP) waves, incident light can be efficiently coupled into waveguides less than 100 nm in diameter. The work focuses on the near-infrared band, tests a variety of metal material combinations, and varies the structural parameters to maximize the coupling efficiency. The structure splits plane incident light with a wavelength of 864 nm and a width of 600 nm into two uniform beams that are coupled separately into a waveguide layer only about 80 nm wide, and the highest coupling efficiency can exceed 95%. Using SPP structures is an effective way to break through the diffraction limit and realize high-performance miniaturization of photonic devices. The metal coupler allows light to be further compressed into small-scale fibers or waveguides, saving space to hold more fiber or waveguide layers and thereby greatly improving optical communication capacity. In addition, high-performance miniaturization of the optical transmission medium can improve the integration of optical devices and provides a feasible route for future photonic computer research and development.
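
    For context (a textbook relation, not taken from this abstract), the in-plane wavevector of a surface plasmon polariton on a flat metal-dielectric interface is

        k_{\mathrm{SPP}} = k_0 \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}}, \qquad k_0 = \frac{2\pi}{\lambda},

    which exceeds the free-space wavevector for a real metal with \varepsilon_m < -\varepsilon_d; it is this ability to carry larger wavevectors that lets SPP-based couplers squeeze light below the diffraction limit.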

  10. Power coupler for the ILC crab cavity

    SciTech Connect

    Burt, G.; Dexter, A.; Jenkins, R.; Beard, C.; Goudket, P.; McIntosh, P.A.; Bellantoni, Leo; /Fermilab

    2007-06-01

    The ILC crab cavity will require the design of an appropriate power coupler. The beam-loading in dipole-mode cavities is considerably more variable than in accelerating cavities, hence simulations have been performed to establish the required external Q. Simulations of a suitable coupler were then performed and were verified using a normal-conducting prototype with variable coupler tips.

  11. 30 CFR 75.805 - Couplers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution § 75.805 Couplers. Couplers that are used with medium-voltage or high-voltage power circuits shall be of the three-phase type with... for the voltage and current expected. All exposed metal on the metallic couplers shall be grounded...

  12. 30 CFR 75.805 - Couplers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Couplers. 75.805 Section 75.805 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR COAL MINE SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution § 75.805 Couplers. Couplers that are used with medium-voltage...

  13. 30 CFR 75.805 - Couplers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution § 75.805 Couplers. Couplers that are used with medium-voltage or high-voltage power circuits shall be of the three-phase type with... for the voltage and current expected. All exposed metal on the metallic couplers shall be grounded...

  14. 30 CFR 75.805 - Couplers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... SAFETY STANDARDS-UNDERGROUND COAL MINES Underground High-Voltage Distribution § 75.805 Couplers. Couplers that are used with medium-voltage or high-voltage power circuits shall be of the three-phase type with... for the voltage and current expected. All exposed metal on the metallic couplers shall be grounded...

  15. Wireless power transfer magnetic couplers

    DOEpatents

    Wu, Hunter; Gilchrist, Aaron; Sealy, Kylee

    2016-01-19

    A magnetic coupler is disclosed for wireless power transfer systems. A ferrimagnetic component is capable of guiding a magnetic field. A wire coil is wrapped around at least a portion of the ferrimagnetic component. A screen is capable of blocking leakage magnetic fields. The screen may be positioned to cover at least one side of the ferrimagnetic component and the coil. A distance across the screen may be at least six times an air gap distance between the ferrimagnetic component and a receiving magnetic coupler.

  16. Birefringent-fiber polarization coupler

    NASA Astrophysics Data System (ADS)

    Youngquist, R. C.; Brooks, J. L.; Shaw, H. J.

    1983-12-01

    Periodically stressing a birefringent fiber once per beat length can cause coherent coupling to occur between polarization modes. Such a birefringent-fiber polarization coupler is described here. More than 30 dB of power transfer between polarizations has been achieved. The device has been used as the output coupler of an in-line Mach-Zehnder interferometer, and better than 25-dB on/off extinction has been measured. The device is wavelength selective and can be used as a multiplexer or as a notch filter. A notch of 9-nm full width at half-maximum has been achieved with a 60-period comb structure.
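
    The resonance condition underlying this device is the standard one for periodic polarization coupling, stated here as general background: power transfers coherently between the two polarization eigenmodes when the spatial period of the applied stress equals the polarization beat length

        L_B = \frac{2\pi}{\beta_x - \beta_y} = \frac{\lambda}{\Delta n},

    where \Delta n is the fiber birefringence. Because L_B depends on wavelength, the coupling is wavelength selective, which is what makes the device usable as a multiplexer or notch filter.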

  17. High Performance Tools And Technologies

    SciTech Connect

    Collette, M R; Corey, I R; Johnson, J R

    2005-01-24

    The goal of this project was to evaluate the capability and limits of current scientific simulation development tools and technologies, with specific focus on their suitability for use with the next generation of scientific parallel applications and High Performance Computing (HPC) platforms. The opinions expressed in this document are those of the authors, and reflect the authors' current understanding of the functionality of the many tools investigated. As a deliverable for this effort, we are presenting this report describing our findings along with an associated spreadsheet outlining current capabilities and characteristics of leading and emerging tools in the high performance computing arena. This first chapter summarizes our findings (which are detailed in the other chapters) and presents our conclusions, remarks, and anticipations for the future. In the second chapter, we detail how various teams in our local high performance community utilize HPC tools and technologies, and mention some common concerns they have about them. In the third chapter, we review the platforms currently or potentially available for utilizing these tools and technologies to help in software development. Subsequent chapters attempt to provide an exhaustive overview of the available parallel software development tools and technologies, including their strong and weak points and future concerns. We categorize them as debuggers, memory checkers, performance analysis tools, communication libraries, data visualization programs, and other parallel development aids. The last chapter contains our closing information. Included with this paper at the end is a table of the discussed development tools and their operational environment.

  18. Side polished twin-core fiber coupler

    NASA Astrophysics Data System (ADS)

    Wang, Xianbin; Yuan, Libo

    2015-07-01

    A novel optical fiber coupler was proposed and fabricated for coupling each core of a twin-core fiber (TCF) with a single-core fiber (SCF) core simultaneously and accessing both cores of the TCF independently. The coupler is mainly composed of two side-polished SCFs and a side-polished TCF. Each optical field launched from the TCF can be coupled into the corresponding side-polished SCF. The coupler has a simple structure and low cross-talk between the two cores.

  19. Fabrication Of Fiber-Optic Waveguide Coupler

    NASA Technical Reports Server (NTRS)

    Goss, Willis; Nelson, Mark D.; Mclauchlan, John M.

    1989-01-01

    Technique for making four-port, single-mode fiber-optic waveguide couplers requires no critically-precise fabrication operations or open-loop processes. Waveguide couplers analogous to beam-splitter prisms. Essential in many applications that require coherent separation or combination of two waves; for example, for interferometric purposes. Components of optical waveguide coupler held by paraffin on microscope slide while remaining cladding of two optical fibers fused together by arc welding.

  20. RF Power and HOM Coupler Tutorial

    SciTech Connect

    Rusnak, B

    2003-10-28

    Radio frequency (RF) couplers are used on superconducting cavities to deliver RF power for creating accelerating fields and to remove unwanted higher-order mode power for reducing emittance growth and cryogenic load. RF couplers in superconducting applications present a number of interdisciplinary design challenges that need to be addressed, since poor performance in these devices can profoundly impact accelerator operations and the overall success of a major facility. This paper will focus on critical design issues for fundamental and higher order mode (HOM) power couplers, highlight a sampling of reliability-related problems observed in couplers, and discuss some design strategies for improving performance.

  1. High performance polymer development

    NASA Technical Reports Server (NTRS)

    Hergenrother, Paul M.

    1991-01-01

    The term high performance as applied to polymers is generally associated with polymers that operate at high temperatures. High performance is used to describe polymers that perform at temperatures of 177 C or higher. In addition to temperature, other factors obviously influence the performance of polymers such as thermal cycling, stress level, and environmental effects. Some recent developments at NASA Langley in polyimides, poly(arylene ethers), and acetylenic terminated materials are discussed. The high performance/high temperature polymers discussed are representative of the type of work underway at NASA Langley Research Center. Further improvement in these materials as well as the development of new polymers will provide technology to help meet NASA future needs in high performance/high temperature applications. In addition, because of the combination of properties offered by many of these polymers, they should find use in many other applications.

  2. High Performance Polymers

    NASA Technical Reports Server (NTRS)

    Venumbaka, Sreenivasulu R.; Cassidy, Patrick E.

    2003-01-01

    This report summarizes results from research on high performance polymers. The research areas proposed in this report include: 1) Effort to improve the synthesis and to understand and replicate the dielectric behavior of 6HC17-PEK; 2) Continue preparation and evaluation of flexible, low dielectric silicon- and fluorine- containing polymers with improved toughness; and 3) Synthesis and characterization of high performance polymers containing the spirodilactam moiety.

  3. 49 CFR 215.123 - Defective couplers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Defective couplers. 215.123 Section 215.123 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION....123 Defective couplers. A railroad may not place or continue in service a car, if— (a) The car...

  4. 49 CFR 215.123 - Defective couplers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Defective couplers. 215.123 Section 215.123 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION....123 Defective couplers. A railroad may not place or continue in service a car, if— (a) The car...

  5. 49 CFR 215.123 - Defective couplers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Defective couplers. 215.123 Section 215.123 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION....123 Defective couplers. A railroad may not place or continue in service a car, if— (a) The car...

  6. 49 CFR 215.123 - Defective couplers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Defective couplers. 215.123 Section 215.123 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION RAILROAD FREIGHT CAR SAFETY STANDARDS Freight Car Components Draft System § 215.123 Defective couplers. A railroad may...

  7. 49 CFR 215.123 - Defective couplers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... automatically with the adjacent car; (b) The car has a coupler that has a crack in the highly stressed junction... knuckle that is broken or cracked on the inside pulling face of the knuckle. (d) The car has a knuckle pin or knuckle thrower that is: (1) Missing; or (2) Inoperative; or (e) The car has a coupler...

  8. High Power Co-Axial Coupler

    SciTech Connect

    Neubauer, M.; Dudas, A.; Rimmer, Robert A.; Guo, Jiquan; Williams, R. Scott

    2013-12-01

    A very high power coax RF coupler (MW level) is very desirable for a number of accelerator and commercial applications. For example, the development of such a coupler operating at 1.5 GHz may permit the construction of a higher-luminosity version of the Electron-Ion Collider (EIC) being planned at JLab. Muons, Inc. is currently funded by a DOE STTR grant to develop a 1.5-GHz high-power double-window coax coupler with JLab (about 150 kW). Excellent progress has been made on this R&D project, so we propose an extension of this development to build a very high power coax coupler (MW-level peak power and a maximum duty factor of about 4%). The dimensions of the current coax coupler will be scaled up to provide higher power capability.

  9. Optical Waveguide Output Couplers Fabricated in Polymers

    NASA Technical Reports Server (NTRS)

    Watson, Michael D.; Abushagur, Mustafa A. G.; Ashley, Paul R.; Johnson-Cole, Helen

    1998-01-01

    Waveguide output couplers fabricated in Norland Optical Adhesive (NOA) #81 and AMOCO Ultradel 9020D polyimide are investigated. The output couplers are implemented using periodic relief gratings on a planar waveguide. Design theory of the couplers is based on the perturbation approach. Coupling of light from waveguide propagation modes to output radiation modes is described by coupled mode theory and the transmission line approximation of the perturbed area (grating structure). Using these concepts, gratings can be accurately designed to output a minimum number of modes at desired output angles. Waveguide couplers were designed using these concepts. These couplers were fabricated and analyzed for structural accuracy, output beam accuracy, and output efficiency. The results for the two different materials are compared.
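
    The design relation implied by this description is the first-order grating (phase-matching) equation, quoted here in its generic textbook form rather than from the paper: a guided mode of effective index n_eff radiates into a cover medium of index n_c at angles \theta_m satisfying

        n_c \sin\theta_m = n_{\mathrm{eff}} - m\,\frac{\lambda}{\Lambda}, \qquad m = 1, 2, \ldots,

    where \Lambda is the grating period; choosing \Lambda so that only one or a few orders satisfy |n_c \sin\theta_m| \le 1 is what limits the number of radiated modes and fixes their output angles.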

  10. High performance systems

    SciTech Connect

    Vigil, M.B.

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 High Speed Computing Conference, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  11. High-Performance Happy

    ERIC Educational Resources Information Center

    O'Hanlon, Charlene

    2007-01-01

    Traditionally, the high-performance computing (HPC) systems used to conduct research at universities have amounted to silos of technology scattered across the campus and falling under the purview of the researchers themselves. This article reports that a growing number of universities are now taking over the management of those systems and…

  12. High performance polymeric foams

    SciTech Connect

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-08-28

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and a two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy.

  13. Nanoscale plasmonic coupler with tunable direction and intensity ratio controlled by optical vortex

    NASA Astrophysics Data System (ADS)

    Liu, Ting; Wang, Shouyu

    2016-09-01

    Plasmonic couplers with tunable direction and intensity ratio controlled by the exciting optical vortex are proposed in this paper. The nanoscale structure is rather simple, composed only of two thin parallel slits. By modulating the topological charge and size of the exciting optical vortex, different coupling directions and various directional coupling ratios can be obtained with a fixed structure. The proposed plasmonic structure is not only a controllable plasmonic coupler but also a topological charge detector that can determine the direction of phase twisting over a wide range. It is believed that the extremely compact structure can potentially be used in future logic photonic and plasmonic systems.

  14. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

    In this paper, parallelism methodologies for mapping rules derived from machine learning algorithms onto both software and hardware are investigated. Feeding these algorithms with patient disease data yields medical diagnostic decision trees and their corresponding rules. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks, which operate concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measures taken from the patients can be fed both into the chips and the parallel programs and can be recognized according to the classification rules incorporated in the chip and program design. The chips and the programs constitute medical decision support systems and can be incorporated into portable micro devices, assisting physicians in their everyday diagnostic practice.
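
    A minimal, hypothetical Python sketch of the pattern described above, with each diagnostic rule block evaluated in its own thread and the block outputs then combined, is given below; it only illustrates the thread-per-block architecture and is not one of the authors' generated classifiers (the rules and thresholds are invented).

        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical rule blocks as they might be derived from a decision tree.
        def rule_block_1(p):
            return p["systolic_bp"] > 140 and p["age"] > 60

        def rule_block_2(p):
            return p["glucose"] > 126 or p["bmi"] > 30

        def classify(patient):
            # Each block runs concurrently, mimicking concurrently operating chip blocks;
            # their outputs are then combined into a final class label.
            with ThreadPoolExecutor(max_workers=2) as pool:
                f1 = pool.submit(rule_block_1, patient)
                f2 = pool.submit(rule_block_2, patient)
                return "high risk" if (f1.result() or f2.result()) else "low risk"

        print(classify({"systolic_bp": 150, "age": 65, "glucose": 110, "bmi": 27}))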

  15. Overview of recent HOM coupler development

    SciTech Connect

    Xiao, B.

    2015-09-13

    Higher Order Mode (HOM) damping is important for SRF applications, especially for high-intensity machines. A good HOM damping design will help to reduce power load to the cryogenic system and to reduce the risk of beam breakup. The design of HOM damping, including antenna/loop HOM couplers, beam pipe HOM absorbers and waveguide HOM couplers, is to solve a multi-physics problem that involves RF, thermal, mechanical, and beam-cavity interaction issues. In this talk, the author provides an overview on the latest advances of the HOM couplers for high-intensity SRF applications.

  16. Numerical simulation of coupler cavities for linacs

    SciTech Connect

    Ng, C.K.; Derutyer, H.; Ko, K.

    1993-04-01

    We present the numerical procedures involved in evaluating the performance of coupler cavities for linacs. The MAFIA code is used to simulate an X-band accelerator section in the time domain. The input/output coupler cavities for the structure are of the symmetrical double-input design. We calculate the transmission properties of the coupler and compare the results with measurements. We compare the performance of the symmetrical double-input design with that of the conventional single-input type by evaluating the field amplitude and phase asymmetries. We also evaluate the peak field gradient numerically.

  17. Miniature mechanical transfer optical coupler

    DOEpatents

    Abel, Philip; Watterson, Carl

    2011-02-15

    A miniature mechanical transfer (MT) optical coupler ("MMTOC") for optically connecting a first plurality of optical fibers with at least one other plurality of optical fibers. The MMTOC may comprise a beam splitting element, a plurality of collimating lenses, and a plurality of alignment elements. The MMTOC may optically couple a first plurality of fibers disposed in a plurality of ferrules of a first MT connector with a second plurality of fibers disposed in a plurality of ferrules of a second MT connector and a third plurality of fibers disposed in a plurality of ferrules of a third MT connector. The beam splitting element may allow a portion of each beam of light from the first plurality of fibers to pass through to the second plurality of fibers and simultaneously reflect another portion of each beam of light from the first plurality of fibers to the third plurality of fibers.

  18. High Performance Proactive Digital Forensics

    NASA Astrophysics Data System (ADS)

    Alharbi, Soltan; Moa, Belaid; Weber-Jahnke, Jens; Traore, Issa

    2012-10-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, the next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. Data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.
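
    The "parallel extension of the iterative z algorithm" mentioned above is not reproduced here; the sketch below is only a generic z-score outlier pass distributed over worker processes, meant to illustrate the kind of parallel detection step being described. The data, threshold, and chunking are assumptions.

        import numpy as np
        from multiprocessing import Pool

        def zscore_outliers(chunk, threshold=6.0):
            # Flag events whose feature value deviates from the chunk mean
            # by more than `threshold` standard deviations.
            mean, std = chunk.mean(), chunk.std()
            if std == 0:
                return np.zeros(len(chunk), dtype=bool)
            return np.abs((chunk - mean) / std) > threshold

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            events = rng.normal(0.0, 1.0, 1_000_000)       # synthetic per-event feature
            events[::100_000] += 50.0                      # inject a few anomalies
            chunks = np.array_split(events, 8)
            with Pool(processes=8) as pool:
                flags = np.concatenate(pool.map(zscore_outliers, chunks))
            print("suspicious events:", int(flags.sum()))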

  19. Experimental demonstration of ultra-wideband and high-efficiency terahertz spoof surface plasmon polaritons coupler

    NASA Astrophysics Data System (ADS)

    Tang, Heng-He; Ma, Tian-Jun; Liu, Pu-Kun

    2016-05-01

    Spoof surface plasmon polaritons (SSPPs) are promising for subwavelength waveguiding in the terahertz (THz) frequency range. However, they cannot be efficiently excited from spatial propagating or guided waves due to the mismatched momenta. In this paper, a THz coupler is designed to smoothly bridge SSPPs and guided (or propagating) waves. By using a tapered parallel-plate waveguide, the incident energies are efficiently compressed and coupled into a subwavelength gap. Then, the momenta differences are mitigated with a graded grating. The numerical simulations show that the relative bandwidth of the coupler reaches up to 127%, and the maximum coupling efficiency is 99%. More importantly, experiment results in the 0.22 THz-0.33 THz frequency range are also presented to verify the good performance of the coupler. The work provides a technical support for terahertz waveguiding.
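
    For reference, "relative bandwidth" here is the usual fractional bandwidth; applying the standard definition to the experimental window alone (our arithmetic, not a number from the paper), the 0.22-0.33 THz measurement band corresponds to

        \mathrm{BW} = \frac{2\,(f_{\max} - f_{\min})}{f_{\max} + f_{\min}} = \frac{2\,(0.33 - 0.22)}{0.33 + 0.22} \approx 40\%,

    while the quoted 127% figure refers to the wider band covered in the numerical simulations.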

  20. Microfabrication of pre-aligned fiber bundle couplers using ultraviolet lithography of SU-8.

    PubMed

    Yang, Ren; Soper, Steven A; Wang, Wanjun

    2006-01-01

    This paper describes the design, microfabrication and testing of a pre-aligned array of fiber couplers using direct UV-lithography of SU-8. The fiber coupler array includes an out-of-plane refractive microlens array and two fiberport collimator arrays. With the optical axis of the pixels parallel to the substrate, each pixel of the microlens array can be pre-aligned with the corresponding pixels of the fiberport collimator array as defined by the lithography mask design. This out-of-plane polymer 3D microlens array is pre-aligned with the fiber collimator arrays with no additional adjustment and assembly required; therefore, it helps to dramatically reduce the running cost and improve the alignment quality and coupling efficiency. In addition, the experimental results for the fiber couplers are also presented and analyzed.

  1. High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Talcott, Stephen

    High performance liquid chromatography (HPLC) has many applications in food chemistry. Food components that have been analyzed with HPLC include organic acids, vitamins, amino acids, sugars, nitrosamines, certain pesticides, metabolites, fatty acids, aflatoxins, pigments, and certain food additives. Unlike gas chromatography, it is not necessary for the compound being analyzed to be volatile. It is necessary, however, for the compounds to have some solubility in the mobile phase. It is important that the solubilized samples for injection be free from all particulate matter, so centrifugation and filtration are common procedures. Also, solid-phase extraction is used commonly in sample preparation to remove interfering compounds from the sample matrix prior to HPLC analysis.

  2. Folded waveguide coupler for ion cyclotron heating

    SciTech Connect

    Owens, T.L.; Chen, G.L.

    1986-01-01

    A new type of waveguide coupler for plasma heating in the ion cyclotron range of frequencies is described. The coupler consists of a series of interleaved metallic vanes within a rectangular enclosure analogous to a wide rectangular waveguide that has been "folded" several times. At the mouth of the coupler, a plate is attached which contains coupling apertures in each fold or every other fold of the waveguide, depending upon the wavenumber spectrum desired. This plate serves primarily as a wave field polarizer that converts coupler fields to the polarization of the fast magnetosonic wave within the plasma. Theoretical estimates indicate that the folded waveguide is capable of high-efficiency, multimegawatt operation into a plasma. Bench tests have verified the predicted field structure within the waveguide in preparation for high-power tests on the Radio Frequency Test Facility at the Oak Ridge National Laboratory.

  3. The High Performance Storage System

    SciTech Connect

    Coyne, R.A.; Hulen, H.; Watson, R.

    1993-09-01

    The National Storage Laboratory (NSL) was organized to develop, demonstrate and commercialize technology for the storage systems that will be the future repositories of our national information assets. Within the NSL, four Department of Energy laboratories and IBM Federal Systems Company have pooled their resources to develop an entirely new High Performance Storage System (HPSS). The HPSS project concentrates on scalable parallel storage systems for highly parallel computers as well as traditional supercomputers and workstation clusters. Concentrating on meeting the high end of storage system and data management requirements, HPSS is designed using network-connected storage devices to transfer data at rates of 100 million bytes per second and beyond. The resulting products will be portable to many vendors' platforms. The three-year project is targeted to be complete in 1995. This paper provides an overview of the requirements, design issues, and architecture of HPSS, as well as a description of the distributed, multi-organization industry and national laboratory HPSS project.

  4. High performance satellite networks

    NASA Astrophysics Data System (ADS)

    Helm, Neil R.; Edelson, Burton I.

    1997-06-01

    The high performance satellite communications networks of the future will have to be interoperable with terrestrial fiber cables. These satellite networks will evolve from narrowband analogue formats to broadband digital transmission schemes, with protocols, algorithms and transmission architectures that will segment the data into uniform cells and frames, and then transmit these data via larger and more efficient synchronous optical network (SONET) and asynchronous transfer mode (ATM) networks that are being developed for the information "superhighway". These high performance satellite communications and information networks are required for modern applications, such as electronic commerce, digital libraries, medical imaging, distance learning, and the distribution of science data. In order for satellites to participate in these information superhighway networks, it is essential that they demonstrate their ability to: (1) operate seamlessly with heterogeneous architectures and applications, (2) carry data at SONET rates with the same quality of service as optical fibers, (3) qualify transmission delay as a parameter, not a problem, and (4) show that satellites have several performance and economic advantages over fiber cable networks.

  5. High Performance Window Retrofit

    SciTech Connect

    Shrestha, Som S; Hun, Diana E; Desjarlais, Andre Omer

    2013-12-01

    The US Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and Traco partnered to develop high-performance windows for commercial buildings that are cost-effective. The main performance requirement for these windows was that they needed to have an R-value of at least 5 ft² °F h/Btu. This project seeks to quantify the potential energy savings from installing these windows in commercial buildings that are at least 20 years old. To this end, we are conducting evaluations at a two-story test facility that is representative of a commercial building from the 1980s, and are gathering measurements on the performance of its windows before and after double-pane, clear-glazed units are upgraded with R5 windows. Additionally, we will use these data to calibrate EnergyPlus models that will allow us to extrapolate results to other climates. Findings from this project will provide empirical data on the benefits of high-performance windows, which will help promote their adoption in new and existing commercial buildings. This report describes the experimental setup, and includes some of the field and simulation results.
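
    For readers more used to SI units, the R5 requirement converts as follows (standard unit arithmetic, not a figure from the report):

        R_{\mathrm{SI}} = 5\ \mathrm{ft^2\,{}^{\circ}F\,h/Btu} \times 0.1761\ \frac{\mathrm{m^2\,K/W}}{\mathrm{ft^2\,{}^{\circ}F\,h/Btu}} \approx 0.88\ \mathrm{m^2\,K/W}, \qquad U = \frac{1}{R_{\mathrm{SI}}} \approx 1.14\ \mathrm{W/(m^2\,K)}.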

  6. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  7. High Performance Network Monitoring

    SciTech Connect

    Martinez, Jesse E

    2012-08-10

    Network monitoring requires a substantial use of data and error analysis to overcome issues with clusters. Zenoss and Splunk help to monitor system log messages that report issues about the clusters to monitoring services. The Infiniband infrastructure on a number of clusters was upgraded to ibmon2. ibmon2 requires different filters to report errors to system administrators. The focus for this summer is to: (1) implement ibmon2 filters on monitoring boxes to report system errors to system administrators using Zenoss and Splunk; (2) modify and improve scripts for monitoring and administrative usage; (3) learn more about networks, including services and maintenance for high performance computing systems; and (4) gain life experience working with professionals in real-world situations. Filters were created to account for clusters running ibmon2 v1.0.0-1. Ten filters are currently implemented for ibmon2 using Python. The filters look for thresholds on port counters; over certain counts, they report errors to on-call system administrators and modify the grid to show the local host with the issue.
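
    The report states that the ibmon2 filters were written in Python and trigger on port-counter thresholds; the snippet below is a hypothetical illustration of that pattern (the counter names, threshold values, and alert hand-off are invented placeholders, not the site's actual filters).

        # Hypothetical InfiniBand port-counter filter in the spirit described above.
        THRESHOLDS = {"symbol_error_counter": 10, "link_downed_counter": 1}

        def check_port(host, port, counters):
            """Return alert strings for counters that exceed their thresholds."""
            alerts = []
            for name, limit in THRESHOLDS.items():
                value = counters.get(name, 0)
                if value > limit:
                    alerts.append(f"{host} port {port}: {name}={value} exceeds {limit}")
            return alerts

        # Example reading (synthetic); in practice the values would come from the fabric monitor.
        sample = {"symbol_error_counter": 42, "link_downed_counter": 0}
        for alert in check_port("node042", 1, sample):
            print("notify on-call:", alert)     # stand-in for the Zenoss/Splunk hand-off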

  8. High performance sapphire windows

    NASA Technical Reports Server (NTRS)

    Bates, Stephen C.; Liou, Larry

    1993-01-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access in extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will lead to many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  9. Simplified flangeless unisex waveguide coupler assembly

    DOEpatents

    Michelangelo, D.; Moeller, C.P.

    1993-05-04

    A unisex coupler assembly is disclosed capable of providing a leak tight coupling for waveguides with axial alignment of the waveguides and rotational capability. The sealing means of the coupler assembly are not exposed to RF energy, and the coupler assembly does not require the provision of external flanges on the waveguides. In a preferred embodiment, O ring seals are not used and the coupler assembly is, therefore, bakeable at a temperature up to about 150 C. The coupler assembly comprises a split collar which clamps around the waveguides and a second collar which fastens to the split collar. The split collar contains an inner annular groove. Each of the waveguides is provided with an external annular groove which receives a retaining ring. The split collar is clamped around one of the waveguides with the inner annular groove of the split collar engaging the retaining ring carried in the external annular groove in the waveguide. The second collar is then slipped over the second waveguide behind the annular groove and retaining ring therein and the second collar is coaxially secured by fastening means to the split collar to draw the respective waveguides together by coaxial force exerted by the second collar against the retaining ring on the second waveguide. A sealing ring is placed against an external sealing surface at a reduced external diameter end formed on one waveguide to sealingly engage a corresponding sealing surface on the other waveguide as the waveguides are urged toward each other.

  10. Simplified flangeless unisex waveguide coupler assembly

    DOEpatents

    Michelangelo, Dimartino; Moeller, Charles P.

    1993-01-01

    A unisex coupler assembly is disclosed capable of providing a leak tight coupling for waveguides with axial alignment of the waveguides and rotational capability. The sealing means of the coupler assembly are not exposed to RF energy, and the coupler assembly does not require the provision of external flanges on the waveguides. In a preferred embodiment, O ring seals are not used and the coupler assembly is, therefore, bakeable at a temperature up to about 150 °C. The coupler assembly comprises a split collar which clamps around the waveguides and a second collar which fastens to the split collar. The split collar contains an inner annular groove. Each of the waveguides is provided with an external annular groove which receives a retaining ring. The split collar is clamped around one of the waveguides with the inner annular groove of the split collar engaging the retaining ring carried in the external annular groove in the waveguide. The second collar is then slipped over the second waveguide behind the annular groove and retaining ring therein and the second collar is coaxially secured by fastening means to the split collar to draw the respective waveguides together by coaxial force exerted by the second collar against the retaining ring on the second waveguide. A sealing ring is placed against an external sealing surface at a reduced external diameter end formed on one waveguide to sealingly engage a corresponding sealing surface on the other waveguide as the waveguides are urged toward each other.

  11. Evaluation of high-performance computing software

    SciTech Connect

    Browne, S.; Dongarra, J.; Rowan, T.

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  12. Commoditization of High Performance Storage

    SciTech Connect

    Studham, Scott S.

    2004-04-01

    The commoditization of high performance computers started in the late 80s with the attack of the killer micros. Previously, high performance computers were exotic vector systems that could only be afforded by an illustrious few. Now everyone has a supercomputer composed of clusters of commodity processors. A similar commoditization of high performance storage has begun. Commodity disks are being used for high performance storage, enabling a paradigm change in storage and significantly changing the price point of high volume storage.

  13. High power couplers for Project X

    SciTech Connect

    Kazakov, S.; Champion, M.S.; Yakovlev, V.P.; Kramp, M.; Pronitchev, O.; Orlov, Y.; /Fermilab

    2011-03-01

    Project X is a multi-megawatt proton source under development at Fermi National Accelerator Laboratory. The key element of the project is a superconducting (SC) 3-GV continuous wave (CW) proton linac. The linac includes 5 types of SC accelerating cavities at two frequencies (325 and 650 MHz). The cavities consume up to 30 kW average RF power and need proper main couplers. Requirements and the approach to the coupler design are discussed in the report. New cost-effective schemes are described. Results of electrodynamics and thermal simulations are presented.

  14. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-01

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection.
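
    A minimal numpy sketch of the core idea, projecting each chromatographic spectrum onto the orthogonal complement of a background spectral subspace before PARAFAC, is given below; the background spectra and data are synthetic stand-ins, and the published method involves more than this single projection step.

        import numpy as np

        rng = np.random.default_rng(3)
        n_times, n_wavelengths = 200, 50
        X = rng.normal(size=(n_times, n_wavelengths))      # synthetic elution-time x spectrum data

        # Background spectral subspace, e.g. estimated from signal-free regions (synthetic here).
        B = rng.normal(size=(5, n_wavelengths))            # 5 background spectra
        Q, _ = np.linalg.qr(B.T)                           # orthonormal basis of the background space

        # Orthogonal spectral space projection: remove the background component from
        # every time point, leaving the analyte contribution for the PARAFAC model.
        X_corrected = X - X @ Q @ Q.T
        print(X_corrected.shape)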

  15. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    NASA Technical Reports Server (NTRS)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was used for the following: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  16. Comparative Simulation Studies of Multipacting in Higher-Order-Mode Couplers of Superconducting RF Cavities

    SciTech Connect

    Li, Y. M.; Liu, Kexin; Geng, Rongli

    2014-02-01

    Multipacting (MP) in higher-order-mode (HOM) couplers of the International Linear Collider (ILC) baseline cavity and the Continuous Electron Beam Accelerator Facility (CEBAF) 12 GeV upgrade cavity is studied by using the ACE3P suites, developed by the Advanced Computations Department at SLAC. For the ILC cavity HOM coupler, the simulation results show that resonant trajectories exist in three zones, corresponding to an accelerating gradient range of 0.6-1.6 MV/m, 21-34 MV/m, 32-35 MV/m, and > 40 MV/m, respectively. For the CEBAF 12 GeV upgrade cavity HOM coupler, resonant trajectories exist in one zone, corresponding to an accelerating gradient range of 6-13 MV/m. Potential implications of these MP barriers are discussed in the context of future high energy pulsed as well as medium energy continuous wave (CW) accelerators based on superconducting radio frequency cavities. Frequency scaling of MP's predicted in HOM couplers of the ILC, CEBAF upgrade, SNS and FLASH third harmonic cavity is given and found to be in good agreement with the analytical result based on the parallel plate model.

  17. High Performance Fortran for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zima, Hans; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    This paper focuses on the use of High Performance Fortran (HPF) for important classes of algorithms employed in aerospace applications. HPF is a set of Fortran extensions designed to provide users with a high-level interface for programming data parallel scientific applications, while delegating to the compiler/runtime system the task of generating explicitly parallel message-passing programs. We begin by providing a short overview of the HPF language. This is followed by a detailed discussion of the efficient use of HPF for applications involving multiple structured grids such as multiblock and adaptive mesh refinement (AMR) codes as well as unstructured grid codes. We focus on the data structures and computational structures used in these codes and on the high-level strategies that can be expressed in HPF to optimally exploit the parallelism in these algorithms.

  18. Time-Domain Simulation of RF Couplers

    SciTech Connect

    Smithe, David; Carlsson, Johan; Austin, Travis

    2009-11-26

    We have developed a finite-difference time-domain (FDTD) fluid-like approach to integrated plasma-and-coupler simulation [1], and show how it can be used to model LH and ICRF couplers in the MST and larger tokamaks.[2] This approach permits very accurate 3-D representation of coupler geometry, and easily includes non-axi-symmetry in vessel wall, magnetic equilibrium, and plasma density. The plasma is integrated with the FDTD Maxwell solver in an implicit solve that steps over electron time-scales, and permits tenuous plasma in the coupler itself, without any need to distinguish or interface between different regions of vacuum and/or plasma. The FDTD algorithm is also generalized to incorporate a time-domain sheath potential [3] on metal structures within the simulation, to look for situations where the sheath potential might generate local sputtering opportunities. Benchmarking of the time-domain sheath algorithm has been reported in the references. Finally, the time-domain software [4] permits the use of particles, either as field diagnostic (test particles) or to self-consistently compute plasma current from the applied RF power.
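
    For orientation only, a minimal 1-D free-space FDTD loop in normalized units (Courant number 0.5) shows the leapfrog E/H update that full 3-D plasma-coupled solvers such as the one described above are built on; the grid size, step count, and Gaussian source are arbitrary choices for this sketch.

      import numpy as np

      nx, nsteps = 200, 500
      ex = np.zeros(nx)                 # E field at integer grid points
      hy = np.zeros(nx)                 # H field at half-integer grid points
      kc, t0, spread = nx // 2, 40.0, 12.0

      for n in range(nsteps):
          # Update E from the spatial difference of H
          ex[1:] += 0.5 * (hy[:-1] - hy[1:])
          # Soft Gaussian-pulse source at the centre of the grid
          ex[kc] += np.exp(-0.5 * ((t0 - n) / spread) ** 2)
          # Update H from the spatial difference of E
          hy[:-1] += 0.5 * (ex[:-1] - ex[1:])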

  19. Twin laser 2x1 MMI coupler

    NASA Astrophysics Data System (ADS)

    de Pedraza, M. L.

    2005-07-01

    In previous studies, it was shown that using a Y waveguide, a twin laser output signal could be mixed and coupled to a fiber. The need to adapt the dimensions of the Y waveguide and to apply the more restrictive conditions of a coherent regime for laser emission and waveguide mixing led us to try an MMI coupler to focus the output signal. Herein, ideal 2x1 MMI couplers for this purpose are presented in schematic form. Using a TE mode approximated with Gaussian distributions for the twin laser output signal (the input signal to the MMI coupler), the requirement of an optimally focused output signal is considered. Possible longitudinal and width dimensions for the couplers are calculated. The MMI refractive index was assumed to be similar to that of the laser to avoid the drop in transmission produced by reflections at the boundary surface. We also assumed no air gap between the laser and the MMI coupler. The functioning of these ideal devices for coherent and incoherent twin laser emission is discussed.

  20. High efficiency germanium-assisted grating coupler.

    PubMed

    Yang, Shuyu; Zhang, Yi; Baehr-Jones, Tom; Hochberg, Michael

    2014-12-15

    We propose a fiber to submicron silicon waveguide vertical coupler utilizing germanium-on-silicon gratings. The germanium is epitaxially grown on silicon in the same step for building photodetectors. Coupling efficiency based on FDTD simulation is 76% at 1.55 µm and the optical 1 dB bandwidth is 40 nm.

  1. High Performance Thin Layer Chromatography.

    ERIC Educational Resources Information Center

    Costanzo, Samuel J.

    1984-01-01

    Clarifies where in the scheme of modern chromatography high performance thin layer chromatography (TLC) fits and why in some situations it is a viable alternative to gas and high performance liquid chromatography. New TLC plates, sample applications, plate development, and instrumental techniques are considered. (JN)

  2. (abstract) The Design of a Benign Fail-safe Mechanism Using a Low-melting-point Metal Alloy Coupler

    NASA Technical Reports Server (NTRS)

    Blomquist, Richard S.

    1995-01-01

    Because the alpha proton X-ray spectrometer (APXS) sensor head on the Mars Pathfinder rover, Sojourner, is placed on Martian soil by the deployment mechanism (ADM), the rover would be crippled if the actuator failed when the mechanism is in its deployed position, as rover ground clearance is then reduced to zero. This paper describes the unique fail-safe mounted on the ADM, especially the use of a low-temperature-melting alloy as a coupler device. The final form of the design is a low-melting-point metal pellet coupler, made from Cerrobend, in parallel with a Negator spring pack. In its solid state, the metal rigidly connects the driver (the actuator) and the driven part (the mechanism). When commanded, a strip heater wrapped around the coupler melts the metal pellet (at 60 °C), allowing the driven part to turn independently of the driver. The Negator spring retracts the mechanism to its fully stowed position. This concept meets all the design criteria and provides an added benefit: when the metal hardens, the coupler once again rigidly connects the actuator and the mechanism. The concept presented here can easily be applied to other applications. Wherever release devices are needed, low-melting-point couplers can be considered. The issues to be concerned with are thermal isolation, proper setting of the parts before actuation, and possible outgassing. When these issues are overcome, the resulting release mechanism promises to be the lightest, simplest, most power-conserving alternative available.

  3. Single-mode fiber linearly tapered planar waveguide tunable coupler

    NASA Astrophysics Data System (ADS)

    Das, Alok K.; Hussain, Anwar

    1997-09-01

    We developed a simple system of tunable fiber film coupler using a linearly tapered thin-film planar waveguide (PWG) evanescently coupled by a single-mode distributed fiber half-coupler. We investigate the characteristics of the coupler theoretically and experimentally taking into consideration the refractive index (nf) of nonuniform films, the magnitude of nonuniformity (m) of the films, and the source wavelength (λ). The thickness variation of the nonuniform film is along the direction of propagation of optical power. Tapered and plano-concave thin films of a mix of oils as well as a plano-concave poly(methyl methacrylate) film were fabricated to serve as nonuniform PWGs. Similar to a single-mode fiber with a uniform-thickness PWG coupler, such a coupler also provides light modulation with a change of nf. However, position shifting of a half-coupler in a tapered PWG structure along the direction of propagation exhibits the variation of fiber throughput power. This action serves as a simple system for a tunable fiber film coupler. Wavelength-dependent throughput fiber power for such a coupler also behaves as a filter. The center wavelength can be controlled by shifting the position of the half-coupler. A coupling fiber as a half-coupler can be used for efficient coupling. We performed a theoretical analysis of the structure using Marcuse's model and observed good agreement with the experimental results.

  4. Coupler rotation behaviour and its effect on heavy haul trains

    NASA Astrophysics Data System (ADS)

    Xu, Z. Q.; Ma, W. H.; Wu, Q.; Luo, S. H.

    2013-12-01

    When a locomotive coupler rotates at an angle, the lateral component of the coupler force has an adverse effect on the locomotive's safety, particularly in heavy haul trains. In this paper, a model of a 20,000-t heavy haul train in a head-mid locomotive configuration is developed to analyse the rotation behaviour of the locomotive's coupler system and its effect on the dynamic behaviour of such a train's middle locomotive when operating on tangent and curved tracks. The train model includes detailed coupler and draft gear with which to consider the hysteretic characteristics of the rubber draft gear model, the friction characteristics of the coupler knuckles, and the alignment-control characteristics of the coupler shoulder. The results indicate that the coupler's rotation behaviour differs between the tangent and curved tracks, significantly affecting the locomotive's running performance under the braking condition. A larger coupler rotation angle generates a larger lateral component, which increases the wheelset's lateral force and the derailment coefficient. Decreasing the maximum coupler free angle can improve the locomotive's operational performance and safety. Based on these results, the recommended maximum coupler free angle is 4°.
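
    A back-of-envelope illustration of the geometry involved (the force value and angles below are made up, not taken from the paper): the lateral component of a longitudinal coupler force F at rotation angle theta is roughly F*sin(theta), which is why limiting the free angle limits the lateral load on the wheelsets.

      import math

      F = 2000e3   # assumed longitudinal coupler force during braking, N (illustrative)
      for deg in (2, 4, 6, 8):
          lateral = F * math.sin(math.radians(deg))
          print(f"{deg:2d} deg -> lateral component ~ {lateral / 1e3:6.1f} kN")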

  5. High performance flight simulation at NASA Langley

    NASA Technical Reports Server (NTRS)

    Cleveland, Jeff I., II; Sudik, Steven J.; Grove, Randall D.

    1992-01-01

    The use of real-time simulation at the NASA facility is reviewed specifically with regard to hardware, software, and the use of a fiberoptic-based digital simulation network. The network hardware includes supercomputers that support 32- and 64-bit scalar, vector, and parallel processing technologies. The software includes drivers, real-time supervisors, and routines for site-configuration management and scheduling. Performance specifications include: (1) benchmark solution at 165 sec for a single CPU; (2) a transfer rate of 24 million bits/s; and (3) time-critical system responsiveness of less than 35 msec. Simulation applications include the Differential Maneuvering Simulator, Transport Systems Research Vehicle simulations, and the Visual Motion Simulator. NASA is shown to be in the final stages of developing a high-performance computing system for the real-time simulation of complex high-performance aircraft.

  6. Cryostat for testing RF power couplers

    SciTech Connect

    Kuchnir, M.; Champion, M.S.; Koepke, K.P.; Misek, J.R.

    1996-03-01

    Similar to the power leads of accelerator superconducting magnets, the power couplers of accelerator superconducting cavities are components that link room temperature to superfluid helium temperature for the purpose of energy transfer. Instead of conducting kiloamperes of current, they guide megawatts of RF power between those two temperatures. In this paper we describe a cryostat designed for testing the performance of these components and measuring their heat loads. A special feature of this cryostat is its minimum liquid inventory, which considerably simplifies safety-related requirements. This cryostat is part of a Fermilab facility contributing to the international collaboration working on TESLA (TeV Electron Superconducting Linear Accelerator). This facility is now operational, and we present specifications and performance data on the cryostat as well as on the first pair of power couplers tested with it.

  7. Optically controlled quadrature coupler on silicon substrate

    NASA Astrophysics Data System (ADS)

    Bhadauria, Avanish; Sharma, Sonia; Sonania, Shikha; Akhtar, Jamil

    2016-03-01

    In this paper, we have proposed and studied an optically controlled quadrature coupler fabricated on a silicon substrate. The optically controlled quadrature coupler can be realized by terminating its coupled or through ports with an optically induced load. Simulation and experimental results show that by varying the optical intensity, we can control the phase and amplitude of the output RF signal and can realize an optically controlled reflection-type attenuator, a reflection-type phase shifter, and ultrafast switches. The proposed device can be useful for ultrafast signal processing and modulation schemes in high-speed communication, especially in QPSK modulation. Optical control has several advantages over conventional techniques such as MEMS and other semiconductor switching, which have inherent disadvantages and limitations such as slow response times, low power handling capacity, device parasitics, and non-linearity.

  8. High Power Co-Axial Coupler

    SciTech Connect

    Johnson, Rolland; Neubauer, Michael

    2013-08-14

    A superconducting RF (SRF) power coupler capable of handling 500 kW CW RF power at 750 MHz is required for present and future storage rings and linacs. There are over 35 coupler designs for SRF cavities ranging in frequency from 325 to 1500 MHz. Coupler windows vary from cylinders to cones to disks, and RF power couplers will always be limited by the ability of ceramic windows and their matching systems to withstand the stresses due to non-uniform heating from dielectric and wall losses, multipactor, and mechanical flexure. In the Phase II project, we built a double-window coaxial system with materials that would not otherwise be useable due to their individual VSWRs. Double window systems can be operated such that one window is cold (LN2) and one is warm. They can use different materials and still achieve a good match without using matching elements that create problematic multipactor bands. The match of the two windows will always result from the cancellation of the two windows' reflections when they are located approximately a quarter wavelength apart, or at multiples of a quarter wavelength. The window assemblies were carefully constructed to put the window material and its braze joint in compression at all times. This was done using explosion bonding techniques, which allow for inexpensive fabrication of the vacuum/compression ring out of stainless steel with copper plating applied to the inner surface. The EIA 3-1/8” double window assembly was then successfully baked out and tested to 12 kW in a 3-1/8” co-axial system. The thermal gradient across the window was measured to be 90 °C, which represents about 15 ksi tensile stress in an uncompressed window. In our design the compression was calculated to be about 25 ksi, so the net compressive force was 5 ksi at full power.
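
    A quick sanity check of the quarter-wavelength spacing mentioned above (assuming an air-filled TEM coaxial line at 750 MHz; the actual spacing depends on the window materials and the rest of the matching):

      c = 299_792_458.0          # speed of light, m/s
      f = 750e6                  # operating frequency, Hz
      wavelength = c / f
      print(f"lambda   = {wavelength * 100:.1f} cm")
      print(f"lambda/4 = {wavelength / 4 * 100:.1f} cm")   # ~10 cm, or odd multiples thereof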

  9. Integrated-optical directional coupler biosensor

    NASA Astrophysics Data System (ADS)

    Luff, B. J.; Harris, R. D.; Wilkinson, J. S.; Wilson, R.; Schiffrin, D. J.

    1996-04-01

    We present measurements of biomolecular binding reactions, using a new type of integrated-optical biosensor based on a planar directional coupler structure. The device is fabricated by Ag+ - Na+ ion exchange in glass, and definition of the sensing region is achieved by use of transparent fluoropolymer isolation layers formed by thermal evaporation. The suitability of the sensor for application to the detection of environmental pollutants is considered.

  10. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A

    2007-02-05

    The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

  11. Directional multimode coupler for planar magnonics: Side-coupled magnetic stripes

    SciTech Connect

    Sadovnikov, A. V. Nikitov, S. A.; Beginin, E. N.; Sheshukova, S. E.; Romanenko, D. V.; Sharaevskii, Yu. P.

    2015-11-16

    We experimentally demonstrate spin-wave coupling in two laterally adjacent magnetic stripes. By means of Brillouin light scattering spectroscopy, we show that the coupling efficiency depends both on the magnonic waveguides' geometry and on the characteristics of the spin-wave modes. In particular, the lateral confinement of coupled yttrium-iron-garnet stripes enables the possibility of control over the spin-wave propagation characteristics. Numerical simulations (in the time domain and frequency domain) reveal the nature of the intermodal coupling between two magnonic stripes. The proposed topology of the multimode magnonic coupler can be utilized as a building block for the fabrication of integrated parallel functional and logic devices such as a frequency-selective directional coupler or tunable splitter, enabling a number of potential applications for planar magnonics.

  12. A Ratiometric Wavelength Measurement Based on a Silicon-on-Insulator Directional Coupler Integrated Device

    PubMed Central

    Wang, Pengfei; Hatta, Agus Muhamad; Zhao, Haoyu; Zheng, Jie; Farrell, Gerald; Brambilla, Gilberto

    2015-01-01

    A ratiometric wavelength measurement based on a Silicon-on-Insulator (SOI) integrated device is proposed and designed, which consists of directional couplers acting as two edge filters with opposite spectral responses. The optimal separation distance between the two parallel silicon waveguides and the interaction length of the directional coupler are designed to meet the desired spectral response by using local supermodes. The wavelength discrimination ability of the designed ratiometric structure is demonstrated numerically by a beam propagation method and then verified experimentally. The experimental results show general agreement with the theoretical models. The ratiometric wavelength system demonstrates a resolution of better than 50 pm at a wavelength around 1550 nm, with ease of assembly and calibration. PMID:26343668
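
    A conceptual sketch of the ratiometric principle (idealized linear edge filters with opposite slopes; the real directional-coupler spectral responses and band edges in the paper are different): the power ratio of the two filter outputs is a monotonic function of wavelength and can be inverted directly.

      def edge_filters(lam, lam_lo=1540e-9, lam_hi=1560e-9):
          t_up = (lam - lam_lo) / (lam_hi - lam_lo)   # rising-edge filter
          t_dn = (lam_hi - lam) / (lam_hi - lam_lo)   # falling-edge filter
          return t_up, t_dn

      def wavelength_from_ratio(R, lam_lo=1540e-9, lam_hi=1560e-9):
          # Invert R = t_up / t_dn = (lam - lam_lo) / (lam_hi - lam)
          return (lam_lo + R * lam_hi) / (1.0 + R)

      t_up, t_dn = edge_filters(1551.3e-9)
      print(wavelength_from_ratio(t_up / t_dn))   # recovers ~1551.3 nm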

  13. A Third Generation Lower Hybrid Coupler

    SciTech Connect

    S. Bernabei; J. Hosea; C. Kung; D. Loesser; J. Rushinski; J.R. Wilson; R. Parker

    2001-12-05

    The Princeton Plasma Physics Laboratory (PPPL) and the Massachusetts Institute of Technology (MIT) are preparing an experiment of current profile control using lower-hybrid waves in order to produce and sustain advanced tokamak regimes in steady-state conditions in Alcator C-Mod. Unlike JET's, ToreSupra's and JT60's couplers, the C-Mod lower-hybrid coupler does not employ the now conventional multijunction design, but will have similar characteristics, compactness, and internal power division while retaining full control of the antenna element phasing. This is achieved by using 3 dB vertical power splitters and a stack of laminated plates with the waveguides milled in them. Construction is simplified and allows easy control and maintenance of all parts. Many precautions are taken to avoid arcing. Special care is also taken to avoid the recycling of reflected power which could affect the coupling and the launched n|| spectrum. The results from C-Mod should allow further simplification in the designs of the coupler planned for KSTAR (Korea Superconducting Tokamak Advanced Research) and ITER (International Thermonuclear Experimental Reactor).

  14. High Power Co-Axial SRF Coupler

    SciTech Connect

    M.L. Neubauer, R.A. Rimmer

    2009-05-01

    There are over 35 coupler designs for SRF cavities ranging in frequency from 325 to 1500 MHz. Two-thirds of these designs are coaxial couplers using disk or cylindrical ceramics in various combinations and configurations. While it is well known that dielectric losses go down by several orders of magnitude at cryogenic temperatures, it is not well known that the thermal conductivity also goes down, and it is the ratio of thermal conductivity to loss tangent (the SRF ceramic Quality Factor) and the ceramic volume which will determine the heat load of any given design. We describe a novel, robust co-axial SRF coupler design which uses compressed window technology. This technology will allow the use of highly thermally conductive materials for cryogenic windows. The mechanical designs will fit into standard-sized ConFlat® flanges for ease of assembly. Two windows will be used in a coaxial line. The distance between the windows is adjusted to cancel their reflections so that the same window can be used in many different applications at various frequencies.
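
    A rough order-of-magnitude sketch of why the thermal-conductivity-to-loss-tangent ratio matters (all material and field values below are assumptions, not a window design): the dielectric heat load scales with tan(delta), while the temperature rise for a given load scales with 1/k.

      import math

      f     = 750e6       # RF frequency, Hz (assumed)
      E_rms = 1e5         # RMS electric field in the ceramic, V/m (assumed)
      eps0  = 8.854e-12
      eps_r = 9.8         # alumina-like relative permittivity (assumed)
      tan_d = 1e-4        # loss tangent (assumed)
      k     = 30.0        # thermal conductivity, W/(m K) (assumed)
      L     = 5e-3        # window thickness, m (assumed)

      p_vol = 2 * math.pi * f * eps0 * eps_r * tan_d * E_rms ** 2   # W/m^3, dielectric loss
      dT = p_vol * L ** 2 / (2 * k)   # slab with uniform heating, one cooled face
      print(f"volumetric loss ~ {p_vol / 1e3:.0f} kW/m^3, temperature rise ~ {dT:.2f} K")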

  15. Improved input and output couplers for SC acceleration structure

    SciTech Connect

    Solyak, N.; Gonin, I.; Latina, A.; Lunin, A.; Poloubotko, V.; Yakovlev, V.; /Fermilab

    2009-04-01

    Different couplers are described that allow the reduction of both the transverse wake potential and the RF kick in the SC acceleration structure of the ILC. A simple rotation of the couplers that reduces the RF kick and transverse wake kick is discussed for both the main linac and the bunch compressors, along with possible limitations of this method. Designs of a coupler unit are presented which preserve the axial symmetry of the structure and reduce both the RF kick and the transverse wake field.

  16. Tapered Velocity Couplers and Devices: a Treatise

    NASA Astrophysics Data System (ADS)

    Kim, Hyoun Soo

    A polarization-independent device is highly desirable for use in single-mode fiber optical communication systems. The tapered velocity coupler (TVC) is expected to play an important role since its operation is polarization independent as well as wavelength insensitive. Thus far, the TVC has received little attention, primarily because of the unusually long device length required for complete power transfer. In this dissertation we establish that a TVC with an acceptable device length for integration can indeed be realized and integrated by tapering in index as well as in dimension. We demonstrate, for the first time, that complete power transfer can be achieved in a velocity coupler tapered in both index and dimension in Ti:LiNbO3, with the device length reduced to one quarter of that of a conventional TVC. The coupler is analyzed by use of a step-transition model in conjunction with the local normal modes of the graded-index TVC, overcoming the deficiency of the five-layer step-index model. We further demonstrate a Ti:LiNbO3 digital optical switch with the smallest voltage-length product reported to date, namely 7.2 Vcm for the TM mode and 24 Vcm for the TE mode, with 15 dB crosstalk. In an effort to extend the index- and dimension-tapered velocity coupler concept to step-index compound semiconductor waveguides, we introduce proton exchanged periodically segmented (PEPS) waveguides. PEPS waveguides in LiNbO3 are first studied theoretically and experimentally. The mode index of PEPS waveguides increases linearly with duty cycle and finally saturates. Next, segmented waveguides in AlGaAs/GaAs are characterized in terms of propagation loss and modal size with respect to duty cycle. These segmented waveguides will be utilized in the development of step-index tapered velocity couplers. Finally, we present an application of the TVC as an optical interconnect. In particular, a tapered waveguide interconnect between a single quantum well (SQW) laser and a multi-quantum well

  17. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In this tutorial, we will discuss the top five current RISC microprocessors: the IBM Power2, which is used in the IBM RS6000/590 workstation and in the IBM SP2 parallel supercomputer; the DEC Alpha, which is used in the DEC Alpha workstation and in the Cray T3D; the MIPS R8000, which is used in the SGI Power Challenge; the HP PA-RISC 7100, which is used in the HP 700 series workstations and in the Convex Exemplar; and the Cray proprietary processor, which is used in the new Cray J916. The architecture of these microprocessors will first be presented. The effective performance of these processors will then be compared, both by citing standard benchmarks and in the context of implementing real applications. In the process, different programming models such as data parallel (CM Fortran and HPF) and message passing (PVM and MPI) will be introduced and compared. The latest NAS Parallel Benchmark (NPB) absolute performance and performance per dollar figures will be presented. The next generation of the NPB will also be described. The tutorial will conclude with a discussion of general trends in the field of high performance computing, including likely future developments in hardware and software technology, and the relative roles of vector supercomputers, tightly coupled parallel computers, and clusters of workstations. This tutorial will provide a unique cross-machine comparison not available elsewhere.

  18. Climate Modeling using High-Performance Computing

    SciTech Connect

    Mirin, A A; Wickett, M E; Duffy, P B; Rotman, D A

    2005-03-03

    The Center for Applied Scientific Computing (CASC) and the LLNL Atmospheric Science Division (ASD) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. As part of LLNL's participation in DOE's Scientific Discovery through Advanced Computing (SciDAC) program, members of CASC and ASD are collaborating with other DOE labs and NCAR in the development of a comprehensive, next-generation global climate model. This model incorporates the most current physics and numerics and capably exploits the latest massively parallel computers. One of LLNL's roles in this collaboration is the scalable parallelization of NASA's finite-volume atmospheric dynamical core. We have implemented multiple two-dimensional domain decompositions, where the different decompositions are connected by high-speed transposes. Additional performance is obtained through shared memory parallelization constructs and one-sided interprocess communication. The finite-volume dynamical core is particularly important to atmospheric chemistry simulations, where LLNL has a leading role.
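
    A toy illustration of decompositions connected by transposes (a sketch only; the processor grid, grid sizes, and rank-to-block mapping below are assumptions, not the actual dynamical-core code): a lat/lon decomposition keeps all vertical levels local, a lat/level decomposition keeps all longitudes local, and the transpose exchanges the intersections of the two block layouts.

      def block(n, parts, p):
          """Index range [lo, hi) of part p when n points are split into `parts` blocks."""
          base, rem = divmod(n, parts)
          lo = p * base + min(p, rem)
          return lo, lo + base + (1 if p < rem else 0)

      nlat, nlon, nlev = 96, 144, 72
      px, py = 4, 6                      # assumed processor grid

      # Rank r owns lat-block r//py in both layouts, lon-block r%py in the
      # lat/lon layout, and lev-block r%py in the lat/level layout.
      r = 7
      lat_r, lon_r = block(nlat, px, r // py), block(nlon, py, r % py)
      print("lat/lon layout: rank 7 owns lats", lat_r, "lons", lon_r, "and all", nlev, "levels")

      # In the transpose, rank r exchanges data only with ranks in the same
      # lat-block row; each message carries lons lon_r and that rank's levels.
      for s in range(px * py):
          if s // py == r // py:
              print(f"  rank {r} <-> rank {s}: lons {lon_r}, levs {block(nlev, py, s % py)}")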

  19. Optimization of unipolar magnetic couplers for EV wireless power chargers

    NASA Astrophysics Data System (ADS)

    Zeng, H.; Liu, Z. Z.; Chen, H. X.; Zhou, B.; Hei, T.

    2016-08-01

    In order to improve the coupling coefficient of EV wireless power chargers, it is important to optimize the magnetic couplers. To this end, the relationship between coupling coefficient and efficiency is derived, and an expression for the coupling coefficient based on the magnetic circuit is deduced, which provides the basis for optimizing the couplers. By 3D FEM simulation, the optimal core structure and coils are designed for unipolar circular couplers. Experiments are designed to verify the correctness of the optimization results; compared with the previous coupler, the transmission efficiency is improved and the weight is reduced.
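
    The standard inductive-link relations behind such an optimization (a generic sketch, not the authors' derivation; the component values are illustrative): the coupling coefficient follows from the mutual inductance, and the commonly quoted maximum link efficiency depends only on the k*Q figure of merit.

      import math

      def coupling_coefficient(M, L1, L2):
          return M / math.sqrt(L1 * L2)

      def max_link_efficiency(k, Q1, Q2):
          kq2 = k ** 2 * Q1 * Q2
          return kq2 / (1.0 + math.sqrt(1.0 + kq2)) ** 2

      k = coupling_coefficient(M=12e-6, L1=120e-6, L2=120e-6)   # illustrative values, H
      print(f"k = {k:.2f}, eta_max = {max_link_efficiency(k, 200, 200):.3f}")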

  20. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1998-08-11

    A digital system provides sending and receiving gateways for HIPPI interfaces. Electronic logic circuitry formats data signals and overhead signals in a data frame that is suitable for transmission over a connecting fiber optic link. Multiplexers route the data and overhead signals to a framer module. The framer module allocates the data and overhead signals to a plurality of 9-byte words that are arranged in a selected protocol. The formatted words are stored in a storage register for output through the gateway.

  1. High-Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    Reuhs, Bradley L.; Rounds, Mary Ann

    High-performance liquid chromatography (HPLC) developed during the 1960s as a direct offshoot of classic column liquid chromatography through improvements in the technology of columns and instrumental components (pumps, injection valves, and detectors). Originally, HPLC was the acronym for high-pressure liquid chromatography, reflecting the high operating pressures generated by early columns. By the late 1970s, however, high-performance liquid chromatography had become the preferred term, emphasizing the effective separations achieved. In fact, newer columns and packing materials offer high performance at moderate pressure (although still high pressure relative to gravity-flow liquid chromatography). HPLC can be applied to the analysis of any compound with solubility in a liquid that can be used as the mobile phase. Although most frequently employed as an analytical technique, HPLC also may be used in the preparative mode.

  2. INL High Performance Building Strategy

    SciTech Connect

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation’s premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” [2009], EO 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” [2007], and DOE Order 430.2B, “Departmental Energy, Renewable Energy, and Transportation Management” [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design

  3. The path toward HEP High Performance Computing

    NASA Astrophysics Data System (ADS)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a "High Performance" implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from

  4. Active polarization coupler for birefringent fiber

    NASA Astrophysics Data System (ADS)

    Brooks, J. L.; Youngquist, R. C.; Kino, G. S.; Shaw, H. J.

    1984-06-01

    Static coupling between polarization modes achieved by periodically stressing birefringent fiber once per beat length was recently reported. The same scheme is now used to obtain coupling modulation at kilohertz-to-megahertz frequencies by applying pressure to the fiber with an oscillating piezoelectric ceramic. An amplitude of 30-50 V (peak to peak) was found to be necessary to modulate the polarization coupling from a minimum to a maximum. Polarization modulation is also achieved by applying stress along one fiber polarization axis between the two static couplers of a Mach-Zehnder interferometer.

  5. High Performance Bulk Thermoelectric Materials

    SciTech Connect

    Ren, Zhifeng

    2013-03-31

    Over more than 13 years, we have carried out research on the electron pairing symmetry of superconductors, on the growth and field emission properties of carbon nanotubes and semiconducting nanowires, on high performance thermoelectric materials, and on other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  6. High-Performance Ball Bearing

    NASA Technical Reports Server (NTRS)

    Bursey, Roger W., Jr.; Haluck, David A.; Olinger, John B.; Owen, Samuel S.; Poole, William E.

    1995-01-01

    High-performance bearing features strong, lightweight, self-lubricating cage with self-lubricating liners in ball apertures. Designed to operate at high speed (tens of thousands of revolutions per minute) in cryogenic environment like liquid-oxygen or liquid-hydrogen turbopump. Includes inner race, outer race, and cage keeping bearing balls equally spaced.

  7. TTF3 power coupler thermal analysis for LCLS-II CW operation

    SciTech Connect

    Xiao, L.; Adolphsen, C.; Li, Z.; Nantista, C.; Raubenheimer, T.; Solyak, N.; Gonin, I.

    2015-05-13

    The TESLA 9-cell SRF cavity design has been adopted for use in the LCLS-II SRF Linac. Its TTF3 coaxial fundamental power coupler (FPC), optimized for pulsed operation in European XFEL and ILC, requires modest changes to make it suitable for LCLS-II continuous-wave (CW) operation. For LCLS-II it must handle up to 7 kW of power, fully reflected, with the maximum temperature around 450 K, the coupler bake temperature. In order to improve TTF3 FPC cooling, an increased copper plating thickness will be used on the inner conductor of the ‘warm’ section of the coupler. Also, the antenna will be shortened to achieve higher cavity Qext values. Fully 3D FPC thermal analysis has been performed using the SLAC-developed parallel finite element code suite ACE3P, which includes electromagnetic codes and an integrated electromagnetic, thermal and mechanical multi-physics code. In this paper, we present TTF3 FPC thermal analysis simulation results obtained using ACE3P as well as a comparison with measurement results.

  8. An Over-moded Fundamental Power Coupler for the ILC

    SciTech Connect

    Jeff Neilson

    2009-05-20

    The current design of fundamental power couplers for the ILC is expensive and requires excessively long conditioning times. The goal of this development is the design of a coupler that requires little rf processing and is significantly less expensive to build than the present ILC coupler. The goal of this program is the development of a new technology for power couplers. This new technology is based on the cylindrical TE01 mode and other over-moded technologies developed for the X-band rf distribution system of the NCLTA. During the Phase I program, a TE10-to-TE01 mode transducer suitable for use as part of a power coupler in the ILC will be designed, built and tested. Following a successful test, prototype designs of the TE01-to-cavity coupler and thermal will be produced. A detailed study of the suitability of this over-moded waveguide technology for the ILC power coupler will be provided in the final report. Development of over-moded power couplers for superconducting cavities could find application in many worldwide accelerator projects, such as SNS, the Jefferson Lab upgrade, RIA, and TESLA, in addition to the ILC.

  9. Evaluation of GPFS Connectivity Over High-Performance Networks

    SciTech Connect

    Srinivasan, Jay; Canon, Shane; Andrews, Matthew

    2009-02-17

    We present the results of an evaluation of new features of the latest release of IBM's GPFS filesystem (v3.2). We investigate different ways of connecting to a high-performance GPFS filesystem from a remote cluster using Infiniband (IB) and 10 Gigabit Ethernet. We also examine the performance of the GPFS filesystem with both serial and parallel I/O. Finally, we also present our recommendations for effective ways of utilizing high-bandwidth networks for high-performance I/O to parallel file systems.

  10. New HOM coupler design for high current SRF cavity

    SciTech Connect

    Xu, W.; Ben-Zvi, I.; Belomestnykh, S.; Hahn, H.; Johnson, E.

    2011-03-28

    Damping higher order modes (HOMs) significantly to avoid beam instability is a challenge for the high current Energy Recovery Linac-based eRHIC at BNL. To avoid the overheating effect and high tuning sensitivity, a new band-stop HOM coupler is being designed at BNL. The new HOM coupler has a bandwidth of tens of MHz to reject the fundamental mode, which will avoid overheating due to fundamental frequency shifts during cooldown. In addition, the S21 parameter of the band-pass filter is nearly flat from the first higher order mode to 5 times the fundamental frequency. The simulation results showed that the new couplers effectively damp HOMs for the eRHIC cavity with an enlarged beam tube diameter and two 120° HOM couplers at each side of the cavity. This paper presents the design of the HOM coupler, its HOM damping capacity for the eRHIC cavity, and prototype test results.

  11. Free electron laser variable bridge coupler

    SciTech Connect

    Spalek, G.; Billen, J.H.; Garcia, J.A.; McMurry, D.E.; Harnsborough, L.D.; Giles, P.M.; Stevens, S.B.

    1985-01-01

    The Los Alamos free-electron laser (FEL) is being modified to test a scheme for recovering most of the power in the residual 20-MeV electron beam by decelerating the microbunches in a linear standing-wave accelerator and using the recovered energy to accelerate new beam. A variable-coupler low-power model that resonantly couples the accelerator and decelerator structures has been built and tested. By mixing the TE101 and TE102 modes, this device permits continuous variation of the decelerator fields relative to the accelerator fields through a range of 1:1 to 1:2.5. Phase differences between the two structures are kept below 1° and are independent of power-flow direction. The rf power is also fed to the two structures through this coupling device. Measurements were also made on a three-post-loaded variable coupler that is a promising candidate for the same task.

  12. High performance ammonium nitrate propellant

    NASA Technical Reports Server (NTRS)

    Anderson, F. A. (Inventor)

    1979-01-01

    A high performance propellant having greatly reduced hydrogen chloride emission is presented. It comprises: (1) a minor amount of hydrocarbon binder (10-15%), (2) at least 85% solids, including ammonium nitrate as the primary oxidizer (about 40% to 70%), (3) a significant amount (5-25%) of powdered metal fuel, such as aluminum, (4) a small amount (5-25%) of ammonium perchlorate as a supplementary oxidizer, and (5) optionally a small amount (0-20%) of a nitramine.
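
    A trivial consistency check of such a formulation against the stated ranges (the example numbers below are chosen to fall inside those ranges and are not taken from the patent):

      candidate = {                          # weight percent (hypothetical formulation)
          "hydrocarbon binder":    12.0,     # 10-15%
          "ammonium nitrate":      58.0,     # ~40-70%
          "aluminum":              15.0,     # 5-25%
          "ammonium perchlorate":  10.0,     # 5-25%
          "nitramine":              5.0,     # 0-20%
      }
      total = sum(candidate.values())
      solids = total - candidate["hydrocarbon binder"]
      print(f"total = {total:.1f}%  solids = {solids:.1f}%  (must be 100% and >= 85%)")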

  13. New, high performance rotating parachute

    SciTech Connect

    Pepper, W.B. Jr.

    1983-01-01

    A new rotating parachute has been designed primarily for recovery of high performance reentry vehicles. Design and development/testing results are presented from low-speed wind tunnel testing, free-flight deployments at transonic speeds and tests in a supersonic wind tunnel at Mach 2.0. Drag coefficients of 1.15 based on the 2-ft diameter of the rotor have been measured in the wind tunnel. Stability of the rotor is excellent.

  14. High performance dielectric materials development

    NASA Technical Reports Server (NTRS)

    Piche, Joe; Kirchner, Ted; Jayaraj, K.

    1994-01-01

    The mission of polymer composites materials technology is to develop materials and processing technology to meet DoD and commercial needs. The following are outlined in this presentation: high performance capacitors, high temperature aerospace insulation, rationale for choosing Foster-Miller (the reporting industry), the approach to the development and evaluation of high temperature insulation materials, and the requirements/evaluation parameters. Supporting tables and diagrams are included.

  15. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  16. High-performance computing in seismology

    SciTech Connect

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  17. Integrated dual-mode 3 dB power coupler based on tapered directional coupler

    PubMed Central

    Luo, Yuchan; Yu, Yu; Ye, Mengyuan; Sun, Chunlei; Zhang, Xinliang

    2016-01-01

    A dual-mode 3 dB power coupler based on silicon-on-insulator platform for mode division multiplexing system is proposed and demonstrated. The device, which consists of a tapered directional coupler and two output bend waveguides, has a 50:50 coupling ratio around the wavelength of 1550 nm for both fundamental and first order transverse magnetic (TM0 and TM1) modes. Based on asymmetrical tapered structure, a short common coupling length of ~15.2 μm for both modes is realized by optimizing the width of the tapered waveguide. The measured insertion loss for both modes is less than 0.7 dB. The crosstalks are about −14.3 dB for TM0 mode and −18.1 dB for TM1 mode. PMID:27002747

  18. High temperature superconducting axial field magnetic coupler: realization and test

    NASA Astrophysics Data System (ADS)

    Belguerras, L.; Mezani, S.; Lubin, T.; Lévêque, J.; Rezzoug, A.

    2015-09-01

    Contactless torque transmission through a large airgap is required in some industrial applications in which hermetic isolation is necessary. This torque transmission usually uses magnetic couplers, whose dimension strongly depends on the airgap flux density. The use of high temperature superconducting (HTS) coils to create a strong magnetic field may constitute a solution to reduce the size of the coupler. It is also possible to use this coupler to replace a torque tube in transmitting the torque produced by a HTS motor to its load. This paper presents the detailed construction and tests of an axial field HTS magnetic coupler. Pancake coils have been manufactured from BSCCO tape and used in one rotor of the coupler. The second rotor is mainly composed of NdFeB permanent magnets. Several tests have been carried out showing that the constructed coupler is working properly. A 3D finite element (FE) model of the studied coupler has been developed. Airgap magnetic field and torque measurements have been carried out and compared to the FE results. It has been shown that the measured and the computed quantities are in satisfactory agreement.

  19. RF Input Power Couplers for High Current SRF Applications

    SciTech Connect

    Khan, V. F.; Anders, W.; Burrill, Andrew; Knobloch, Jens; Kugeler, Oliver; Neumann, Axel; Wang, Haipeng

    2014-12-01

    High current SRF technology is being explored in present day accelerator science. The bERLinPro project is presently being built at HZB to address the challenges involved in high current SRF machines with the goal of generating and accelerating a 100 mA electron beam to 50 MeV in continuous wave (cw) mode at 1.3 GHz. One of the main challenges in this project is that of handling the high input RF power required for the photo-injector as well as booster cavities where there is no energy recovery process. A high power co-axial input power coupler is being developed to be used for the photo-injector and booster cavities at the nominal beam current. The coupler is based on the KEK–cERL design and has been modified to minimise the penetration of the coupler tip in the beam pipe without compromising on beam-power coupling (Qext ~ 10^5). Herein we report on the RF design of the high power (115 kW per coupler, dual couplers per cavity) bERLinPro (BP) coupler along with initial results on thermal calculations. We summarise the RF conditioning of the TTF-III couplers (modified for cw operation) performed in the past at BESSY/HZB. A similar conditioning is envisaged in the near future for the low current SRF photo-injector and the bERLinPro main linac cryomodule.

  20. High performance electromagnetic simulation tools

    NASA Astrophysics Data System (ADS)

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has also provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm, and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution will be obtained in toto. This powerful simulation tool has enabled research of the full-wave analysis of complex multicomponent MMIC devices and the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.

  1. Image guide couplers with isotropic and anisotropic coupling elements

    NASA Astrophysics Data System (ADS)

    Kother, Dietmar; Wolff, Ingo

    1988-04-01

    An image guide coupler consisting of a dielectric slab between two conducting plates is proposed, with application to integrated mm-wave circuits. The use of absorber materials is shown to reduce the influence of radiation at the waveguide bends without significant loss of power, and a dielectric coupling element is shown to nearly eliminate the frequency dependence of the dielectric image guide couplers. Switching couplers with quasi-isotropic behavior can be made by adding a premagnetized ferrite slab to the dielectric coupling element.

  2. Inductive coupler for downhole components and method for making same

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Sneddon, Cameron; Fox, Joe; Briscoe, Michael A.

    2006-10-03

    An inductive coupler for downhole components. The inductive coupler includes an annular housing having a recess defined by a bottom portion and two opposing side wall portions. At least one side wall portion includes a lip extending toward but not reaching the other side wall portion. A plurality of generally U-shaped MCEI segments, preferably comprised of ferrite, are disposed in the recess and aligned so as to form a circular trough. The coupler further includes a conductor disposed within the circular trough and a polymer filling spaces between the segments, the annular housing and the conductor.

  3. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented.
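
    For reference, the feature-extraction/matching/alignment step singled out above looks roughly like the following single-threaded OpenCV sketch (file names are placeholders; the paper's LARES implementation parallelizes this pipeline and runs parts of it on the GPU):

      import cv2
      import numpy as np

      img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
      img2 = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)      # placeholder path

      orb = cv2.ORB_create(nfeatures=5000)                       # feature extraction
      kp1, des1 = orb.detectAndCompute(img1, None)
      kp2, des2 = orb.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) # feature matching
      matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

      src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
      H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0) # robust geometric transform
      print(f"{len(matches)} matches, {int(inliers.sum())} RANSAC inliers")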

  4. Automatic Energy Schemes for High Performance Applications

    SciTech Connect

    Sundriyal, Vaibhav

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.

  5. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  6. High performance storable propellant resistojet

    NASA Astrophysics Data System (ADS)

    Vaughan, C. E.

    1992-01-01

    From 1965 until 1985 resistojets were used for a limited number of space missions. Capability increased in stages from an initial application using a 90 W gN2 thruster operating at 123 sec specific impulse (Isp) to a 830 W N2H4 thruster operating at 305 sec Isp. Prior to 1985 fewer than 100 resistojets were known to have been deployed on spacecraft. Building on this base NASA embarked upon the High Performance Storable Propellant Resistojet (HPSPR) program to significantly advance the resistojet state-of-the-art. Higher performance thrusters promised to increase the market demand for resistojets and enable space missions requiring higher performance. During the program three resistojets were fabricated and tested. High temperature wire and coupon materials tests were completed. A life test was conducted on an advanced gas generator.

  7. High Performance Perovskite Solar Cells

    PubMed Central

    Tong, Xin; Lin, Feng; Wu, Jiang

    2015-01-01

    Perovskite solar cells fabricated from organometal halide light harvesters have captured significant attention due to their tremendously low device costs as well as unprecedented rapid progress on power conversion efficiency (PCE). A certified PCE of 20.1% was achieved in late 2014 following the first study of long‐term stable all‐solid‐state perovskite solar cell with a PCE of 9.7% in 2012, showing their promising potential towards future cost‐effective and high performance solar cells. Here, notable achievements of primary device configuration involving perovskite layer, hole‐transporting materials (HTMs) and electron‐transporting materials (ETMs) are reviewed. Numerous strategies for enhancing photovoltaic parameters of perovskite solar cells, including morphology and crystallization control of perovskite layer, HTMs design and ETMs modifications are discussed in detail. In addition, perovskite solar cells outside of HTMs and ETMs are mentioned as well, providing guidelines for further simplification of device processing and hence cost reduction.

  8. High performance magnetically controllable microturbines.

    PubMed

    Tian, Ye; Zhang, Yong-Lai; Ku, Jin-Feng; He, Yan; Xu, Bin-Bin; Chen, Qi-Dai; Xia, Hong; Sun, Hong-Bo

    2010-11-01

    This paper reports two-photon photopolymerization (TPP) fabrication of magnetic microturbines with high surface smoothness for microfluid mixing. As the key component of the magnetic photoresist, Fe(3)O(4) nanoparticles were carefully screened for homogeneous doping. In this work, oleic acid stabilized Fe(3)O(4) nanoparticles synthesized via high-temperature induced organic phase decomposition of an iron precursor show evident advantages in particle morphology. After modification with propoxylated trimethylolpropane triacrylate (PO(3)-TMPTA, a kind of cross-linker), the magnetic nanoparticles were homogeneously doped in an acrylate-based photoresist for TPP fabrication of microstructures. Finally, a magnetic microturbine was successfully fabricated as an active mixing device for remote control of microfluid blending. The development of high quality magnetic photoresists would lead to high performance magnetically controllable microdevices for lab-on-a-chip (LOC) applications. PMID:20721411

  9. Accurate theoretical and experimental characterization of optical grating coupler.

    PubMed

    Fesharaki, Faezeh; Hossain, Nadir; Vigne, Sebastien; Chaker, Mohamed; Wu, Ke

    2016-09-01

    Periodic structures, acting as reflectors, filters, and couplers, are a fundamental building block in many optical devices. In this paper, a three-dimensional simulation of a grating coupler, a well-known periodic structure, is conducted. Guided waves and leakage characteristics of an out-of-plane grating coupler are studied in detail, and its coupling efficiency is examined. Furthermore, a numerical calibration analysis is applied through a commercial software package on the basis of a full-wave finite-element method to calculate the complex propagation constant of the structure and to evaluate the radiation pattern. For experimental evaluation, an optimized grating coupler is fabricated using an electron-beam lithography technique and plasma etching. An excellent agreement between simulations and measurements is observed, thereby validating the demonstrated method. PMID:27607706

  10. IET. Coupling station. Man holds flexible couplers to reactor Dolly ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    IET. Coupling station. Man holds flexible couplers to reactor Dolly and HTRE rig. Date: April 22, 1955. INEEL negative no. 55-1010 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID

  11. Loss and Periodic Coupling Effects in Dielectric Directional Couplers.

    NASA Astrophysics Data System (ADS)

    Youngquist, Robert Carl

    1984-12-01

    This dissertation is concerned with understanding the causes and effects of a new loss mechanism in dielectric directional couplers, namely dissimilar normal mode loss, as well as introducing a new class of all-fiber devices based on the periodic coupling of fiber modes. A formal introduction to coupled mode theory is developed from which directional couplers can be described by using linear propagation operators. Comparisons to the standard coupling matrix approach are made and examples are given. Theoretical arguments and experimental evidence are presented to demonstrate that the coupler modes usually have different losses. Dissimilar mode loss causes the relative phase between the light in the guides to be modified and prevents complete power transfer from occurring. Interferometers using such couplers will exhibit phase errors in their outputs and all-fiber resonators will display an asymmetry in their resonance peaks. In integrated optics lower limits are set on switching extinction ratios. It is shown that much of the analysis presented in the literature on three waveguide couplers is based on approximations that may not be valid in the regimes where the couplers are to be used. A three-waveguide coupler interferometer with dissimilar mode loss is studied and shown to have two independent outputs whose phases are environmentally insensitive to changes in coupler loss and power transfer. Uniform and periodic coupling functions are analyzed and it is shown that complete power transfer can occur when the period of the sinusoidal coupling matches the beat length between the coupled propagating waves. A birefringent fiber polarization coupler and a two-mode fiber modal coupler are demonstrated and evaluated. These compact and simple devices are used to fabricate all-fiber amplitude modulators, notch filters, in-line Mach Zehnder interferometers, and polarizers. Further applications include polarization controllers, signal processing operations such as fast word
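    The two-mode coupled-mode description underlying this work can be summarized in hedged textbook form (not the author's exact formulation) as

        \frac{dA_1}{dz} = -(\alpha_1 + i\beta_1)\,A_1 + i\,\kappa(z)\,A_2, \qquad
        \frac{dA_2}{dz} = -(\alpha_2 + i\beta_2)\,A_2 + i\,\kappa(z)\,A_1,

    where unequal loss coefficients \alpha_1 \neq \alpha_2 express the dissimilar mode loss that prevents complete power transfer, while a periodic coupling \kappa(z) = \kappa_0 \cos(2\pi z/\Lambda) restores complete transfer between otherwise mismatched modes when the period \Lambda equals the beat length L_B = 2\pi/|\beta_1 - \beta_2|, which is the resonance condition exploited by the polarization and modal couplers described above.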

  12. Image guide couplers used in millimeter wave integrated circuits

    NASA Astrophysics Data System (ADS)

    Qi, Lanfen; Xu, Liqun; Luo, Ye

    1988-12-01

    The odd-even mode principle and the effective dielectric constant method are used to explore the dispersion and coupling characteristics of coupled image guides. The design for an image guide directional coupler is discussed. It is suggested that 3-dB and 10-dB couplers in Ka band can be used to provide millimeter wave integrated circuits with flat coupling, mechanical stability, and low losses.

  13. A high-efficiency mode coupler autotracking feed

    NASA Astrophysics Data System (ADS)

    Cipolla, Frank; Seck, Gerry

    The design, construction, and installation of high-efficiency autotracking feeds using a tracking mode coupler at S-, C-, and X-band are presented. These feeds have shown greater than 65 percent efficiency when mounted in a doubly shaped dual reflector antenna. The mode coupler feed attributes include high efficiency in both the data and track channels, full waveguide bandwidth operation, good feed error gradients, high-power handling, and active cross talk correction.

  14. Power coupler kick of the TRIUMF ICM capture cavities

    NASA Astrophysics Data System (ADS)

    Yan, Fang; Laxdal, R. E.; Zvyagintsev, V.; Chao, Yu.; Gong, C.; Koscielniak, S.

    2011-06-01

    The TRIUMF Injector CryoModule (ICM) adopts two superconducting single cavities as the capture section for the low injection energy of 100 keV electrons. Coupler-kick-induced beam deflection and projected emittance growth are among the prime concerns for beam stability, especially at low energies. In low energy applications, the electron velocity changes rapidly inside the cavity, which makes the numerical analysis much more complicated. The commonly used theoretical formulas, based on direct integration or the Panofsky-Wenzel theorem, are not suitable for kick calculations for β < 1 electrons. Moreover, since these kick calculation methods do not account for the injection energy, the beam offset due to the coupler kick may not be negligible at such a low injection energy even if the kick itself is optimized. Thus the beam dynamics code TRACK is used here to simulate the power coupler kick perturbation. The coupler kick can be compensated for by a judicious choice of the coupler position in successive cavities from upstream to downstream. The simulation shows that, because of the adiabatic damping provided by the following superconducting 9-cell cavity, even for the worst orbit distortion case after the two capture cavities the kick is still acceptable at the exit of the ICM after reaching 10 MeV. This paper presents the analysis of the transverse kick and the projected emittance growth induced by the coupler for β < 1 electrons. The simulated results for the TRIUMF ICM capture cavities are described and presented.
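    For orientation, the Panofsky-Wenzel theorem mentioned above relates the transverse kick to the transverse gradient of the accumulated longitudinal field; in one common convention (signs and normalizations vary between references),

        \Delta \vec{p}_{\perp} \;=\; \frac{iq}{\omega}\,\nabla_{\perp}\!\int_{0}^{L} E_z(x,y,z)\, e^{\,i\omega z/v}\, dz ,

    with the velocity v assumed constant (usually v ≈ c) through the structure. For 100 keV electrons, β = v/c = [1 - (1 + T/m_e c^2)^{-2}]^{1/2} ≈ 0.55, and β changes appreciably during capture, which is exactly why this constant-velocity form breaks down and a tracking code such as TRACK is used instead.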

  15. PREFACE: High Performance Computing Symposium 2011

    NASA Astrophysics Data System (ADS)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  16. High performance aerated lagoon systems

    SciTech Connect

    Rich, L.

    1999-08-01

    At a time when less money is available for wastewater treatment facilities and there is increased competition for the local tax dollar, regulatory agencies are enforcing stricter effluent limits on treatment discharges. A solution for both municipalities and industry is to use aerated lagoon systems designed to meet these limits. This monograph, prepared by a recognized expert in the field, provides methods for the rational design of a wide variety of high-performance aerated lagoon systems. Such systems range from those that can be depended upon to meet secondary treatment standards alone to those that, with the inclusion of intermittent sand filters or elements of sequenced biological reactor (SBR) technology, can also provide for nitrification and nutrient removal. Considerable emphasis is placed on the use of appropriate performance parameters, and an entire chapter is devoted to diagnosing performance failures. Contents include: principles of microbiological processes, control of algae, benthal stabilization, design for CBOD removal, design for nitrification and denitrification in suspended-growth systems, design for nitrification in attached-growth systems, phosphorus removal, diagnosing performance.

  17. High performance Cu adhesion coating

    SciTech Connect

    Lee, K.W.; Viehbeck, A.; Chen, W.R.; Ree, M.

    1996-12-31

    Poly(arylene ether benzimidazole) (PAEBI) is a high performance thermoplastic polymer with imidazole functional groups forming the polymer backbone structure. It is proposed that upon coating PAEBI onto a copper surface the imidazole groups of PAEBI form a bond with or chelate to the copper surface resulting in strong adhesion between the copper and polymer. Adhesion of PAEBI to other polymers such as poly(biphenyl dianhydride-p-phenylene diamine) (BPDA-PDA) polyimide is also quite good and stable. The resulting locus of failure as studied by XPS and IR indicates that PAEBI gives strong cohesive adhesion to copper. Due to its good adhesion and mechanical properties, PAEBI can be used in fabricating thin film semiconductor packages such as multichip module dielectric (MCM-D) structures. In these applications, a thin PAEBI coating is applied directly to a wiring layer for enhancing adhesion to both the copper wiring and the polymer dielectric surface. In addition, a thin layer of PAEBI can also function as a protection layer for the copper wiring, eliminating the need for Cr or Ni barrier metallurgies and thus significantly reducing the number of process steps.

  18. ALMA high performance nutating subreflector

    NASA Astrophysics Data System (ADS)

    Gasho, Victor L.; Radford, Simon J. E.; Kingsley, Jeffrey S.

    2003-02-01

    For the international ALMA project's prototype antennas, we have developed a high performance, reactionless nutating subreflector (chopping secondary mirror). This single axis mechanism can switch the antenna's optical axis by +/-1.5" within 10 ms or +/-5" within 20 ms and maintains pointing stability within the antenna's 0.6" error budget. The light weight 75 cm diameter subreflector is made of carbon fiber composite to achieve a low moment of inertia, <0.25 kg m2. Its reflecting surface was formed in a compression mold. Carbon fiber is also used together with Invar in the supporting structure for thermal stability. Both the subreflector and the moving coil motors are mounted on flex pivots and the motor magnets counter-rotate to absorb the nutation reaction force. Auxiliary motors provide active damping of external disturbances, such as wind gusts. Non-contacting optical sensors measure the positions of the subreflector and the motor rocker. The principal mechanical resonance around 20 Hz is compensated with a digital PID servo loop that provides a closed loop bandwidth near 100 Hz. Shaped transitions are used to avoid overstressing mechanical links.
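    As a purely illustrative sketch of the digital PID position loop described above (gains, sample rate, and signals are placeholders, not ALMA parameters), a discrete PID controller can be written in a few lines of Python:

        # Minimal discrete PID loop; all numbers are placeholders, not ALMA values.
        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def update(self, setpoint, measurement):
                err = setpoint - measurement
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        pid = PID(kp=5.0, ki=50.0, kd=0.02, dt=1.0 / 2000.0)  # hypothetical 2 kHz servo
        drive = pid.update(setpoint=1.5, measurement=1.42)    # command for the coil motors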

  19. Adjustable Josephson Coupler for Transmon Qubit Measurement

    NASA Astrophysics Data System (ADS)

    Jeffrey, Evan

    2015-03-01

    Transmon qubits are measured via a dispersive interaction with a linear resonator. In order to be scalable this measurement must be fast, accurate, and must not disrupt the state of the qubit. Speed is of particular importance in a scalable architecture with error correction, as the measurement accounts for a substantial portion of the cycle time and the waiting time associated with measurement is a major source of decoherence. We have found that measurement speed and accuracy can be improved by driving the qubit beyond the critical photon number n_crit = Δ²/(4g²) by a factor of 2-3 without compromising the QND nature of the measurement. While it is expected that such a strong drive will cause qubit state transitions, we find that as long as the readout is sufficiently fast, those transitions are negligible; however, they grow rapidly with time and are not described by a simple rate. Measuring in this regime requires parametric amplifiers with very high saturation power, on the order of -105 dBm, in order to avoid losing SNR when increasing the power. It also requires a Purcell filter to allow fast ring-up and ring-down. Adjustable couplers can be used to further increase the measurement performance, by switching the dispersive interaction on and off much faster than the cavity ring-down time. This technique can also be used to investigate the dynamics of the qubit-cavity interaction beyond the weak dispersive limit n_cavity >= n_crit, which is not easily accessible to standard dispersive measurement due to the cavity time constant.
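    For reference, the quantities in this abstract follow the standard circuit-QED dispersive treatment (textbook relations, not results specific to this work): with qubit-resonator detuning Δ and coupling g,

        n_{\mathrm{crit}} = \frac{\Delta^{2}}{4g^{2}}, \qquad \chi \approx \frac{g^{2}}{\Delta}, \qquad H/\hbar \approx \omega_r\, a^{\dagger}a + \tfrac{1}{2}\,\omega_q\,\sigma_z + \chi\, a^{\dagger}a\,\sigma_z ,

    where χ is the dispersive shift used for readout; the validity of the dispersive approximation is controlled by the ratio n/n_crit, the quantity the measurement drive exceeds by a factor of 2-3 in the regime described above.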

  20. A Component Architecture for High-Performance Computing

    SciTech Connect

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a ''plug and play'' environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel ''cohort.'' We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  1. Calibration of the ERL cavity FPC and PU couplers

    SciTech Connect

    Hahn, H.; Johnson, E.; Kayran, D.

    2010-04-05

    The performance parameters of a superconducting cavity, notably accelerating field and quality factor, are first obtained in a cryogenic vertical test Dewar, and again after the final assembly in its cryostat. The tests involve Network Analyzer (NA) measurements in which the cavity is excited through an input coupler and the properties are obtained from the reflected signal at the input and the transmitted signal from the output coupler. The interpretation of the scattering coefficients in terms of field strength requires the knowledge of the Fundamental Power Coupler (FPC) and Pick-Up (PU) coupler strength, as expressed by their 'external' quality factors Q_FPC and Q_PU. The coupler strength is independent of the field level or cavity losses and thus can be determined at low levels with the scattering coefficients S_11 and S_21, assuming standard 50 Ω terminations in the network analyzer. Also needed is the intrinsic cavity parameter, R_a/Q_0 ≡ {R/Q}, a quantity independent of field or losses which must be obtained from simulation programs, such as Microwave Studio.
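    The standard network-analyzer relations used for this kind of calibration (quoted here as textbook formulas, not necessarily the exact procedure of the report) connect the coupling factors β_FPC = Q_0/Q_FPC and β_PU = Q_0/Q_PU to the low-level scattering coefficients measured on resonance:

        S_{11}\big|_{\omega_0} = \frac{\beta_{\mathrm{FPC}} - 1 - \beta_{\mathrm{PU}}}{\beta_{\mathrm{FPC}} + 1 + \beta_{\mathrm{PU}}}, \qquad
        |S_{21}|^{2}\big|_{\omega_0} = \frac{4\,\beta_{\mathrm{FPC}}\,\beta_{\mathrm{PU}}}{(1 + \beta_{\mathrm{FPC}} + \beta_{\mathrm{PU}})^{2}}, \qquad
        Q_0 = (1 + \beta_{\mathrm{FPC}} + \beta_{\mathrm{PU}})\,Q_L ,

    so that measuring S_11, S_21, and the loaded bandwidth determines Q_FPC and Q_PU independently of the field level, consistent with the statement above.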

  2. Apodized grating coupler using fully-etched nanostructures

    NASA Astrophysics Data System (ADS)

    Wu, Hua; Li, Chong; Li, Zhi-Yong; Guo, Xia

    2016-08-01

    A two-dimensional apodized grating coupler for interfacing between a single-mode fiber and a photonic circuit is demonstrated in order to bridge the mode gap between the grating coupler and the optical fiber. The grating grooves of the grating couplers are realized by columns of fully etched nanostructures, which are utilized to digitally tailor the effective refractive index of each groove in order to obtain a Gaussian-like output diffractive mode and thereby enhance the coupling efficiency. Compared with that of the uniform grating coupler, the coupling efficiency of the apodized grating coupler is increased by 4.3% and 5.7% for nanoholes and nanorectangles as the refractive-index-tuning layer, respectively. Project supported by the National Natural Science Foundation of China (Grant Nos. 61222501, 61335004, and 61505003), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20111103110019), the Postdoctoral Science Foundation of Beijing Funded Project, China (Grant No. Q6002012201502), and the Science and Technology Research Project of Jiangxi Provincial Education Department, China (Grant No. GJJ150998).

  3. Dealing with Multipacting in Fundamental Power Couplers for SRF Cavities

    SciTech Connect

    Mircea Stirbet

    2005-03-19

    Multipacting events are well known and bothersome discharge phenomena specific to vacuum and RF exposed surfaces. Left uncontrolled, these events could affect normal machine operation, limiting performance or inducing irreversible damage of critical components such as ceramic windows. Numerical simulations have been developed and their predictions fit fairly well with real multipacting events in coaxial lines or waveguide-type fundamental power couplers. Controlling multipacting must be considered from the design stage, as well as during manufacture of subassemblies or preparation of the coupler for cavity assembly. All fundamental power couplers must be conditioned using a high power RF source, and during this process, restricting multipacting by adequate instrumentation should be considered. After RF conditioning, during beam acceleration, control of multipacting is achieved with field perturbation methods. This paper summarizes our experience in dealing with multipacting in CW or pulsed fundamental power couplers (LEP, LHC, SNS and RIA) for SRF cavities. The SNS fundamental power coupler is used as an example for controlling multipacting during high power RF conditioning.

  5. A Component Architecture for High-Performance Scientific Computing

    SciTech Connect

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  6. Ultrasensitive optical microfiber coupler based sensors operating near the turning point of effective group index difference

    NASA Astrophysics Data System (ADS)

    Li, Kaiwei; Zhang, Ting; Liu, Guigen; Zhang, Nan; Zhang, Mengying; Wei, Lei

    2016-09-01

    We propose and study an optical microfiber coupler (OMC) sensor working near the turning point of the effective group index difference between the even and odd supermodes to achieve high refractive index (RI) sensitivity. Theoretical calculations reveal that infinite sensitivity can be obtained when the measured RI is close to the turning point value. This diameter-dependent turning point corresponds to the condition that the effective group index difference equals zero. To validate the proposed sensing mechanism, we experimentally demonstrate an ultrahigh sensitivity of 39541.7 nm/RIU at a low ambient RI of 1.3334 based on an OMC with a diameter of 1.4 μm. An even higher sensitivity can be achieved by carrying out the measurements at an RI closer to the turning point. The resulting ultrasensitive RI sensing platform offers a substantial impact on a variety of applications, from high performance trace analyte detection to small molecule sensing.
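    The divergence of the sensitivity at the turning point follows from the standard interference condition for the two supermodes (a textbook relation, quoted here for orientation rather than taken from the paper): a spectral feature at wavelength λ shifts with ambient index n_a as

        S \;=\; \frac{d\lambda}{dn_a} \;=\; \frac{\lambda\,\partial(\Delta n_{\mathrm{eff}})/\partial n_a}{\Delta n_g}, \qquad
        \Delta n_g \;=\; \Delta n_{\mathrm{eff}} - \lambda\,\frac{\partial(\Delta n_{\mathrm{eff}})}{\partial \lambda},

    so S grows without bound as the effective group index difference Δn_g between the even and odd supermodes approaches zero, which is exactly the diameter-dependent turning-point condition stated above.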

  7. Carpet Aids Learning in High Performance Schools

    ERIC Educational Resources Information Center

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  8. Optical interconnection networks for high-performance computing systems.

    PubMed

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  9. 30 CFR 77.805 - Cable couplers and connection boxes; minimum design requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.805 Cable couplers and...) Couplers shall be constructed to cause the ground check continuity conductor to break first and the...

  10. 30 CFR 77.805 - Cable couplers and connection boxes; minimum design requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.805 Cable couplers and connection boxes; minimum design requirements. (a)(1) Couplers that are used in medium- or high-voltage...

  11. Dynamic performances of an innovative coupler used in heavy haul trains

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Zhang, Xiaoxia; Zhang, Hongjun

    2014-10-01

    An innovative structure for a heavy haul coupler with an arc surface contact and a restoring bumpstop is proposed. This coupler produces a small lateral force at small yaw angles and limits the yaw angle so that the coupler lateral force remains allowable under intense compressive force. The main structural characteristics of the combined contact coupler are a laterally movable follower with an appropriate friction coefficient of 0.06-0.08 and a slide block with a single degree of freedom of longitudinal movement. In order to verify these performances, a multi-body dynamics model with four heavy haul locomotives and three detailed couplers was established to simulate the process of emergency braking. In addition, the coupler yaw instability and wheel set lateral forces were tested in order to investigate the effect of relevant parameters on the coupler performance. The combined contact coupler is suitable for heavy haul trains and shows good dynamic performance.

  12. Understanding and Improving High-Performance I/O Subsystems

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Frieder, Gideon; Clark, A. James

    1996-01-01

    This research program has been conducted in the framework of the NASA Earth and Space Science (ESS) evaluations led by Dr. Thomas Sterling. In addition to the many important research findings for NASA and the prestigious publications, the program has helped orient the doctoral research of two students towards parallel input/output in high-performance computing. Further, the experimental results in the case of the MasPar were very useful and helpful to MasPar, with which the P.I. has had many interactions with the technical management. The contributions of this program are drawn from three experimental studies conducted on different high-performance computing testbeds/platforms, and are therefore presented in 3 different segments as follows: 1. Evaluating the parallel input/output subsystem of NASA high-performance computing testbeds, namely the MasPar MP-1 and MP-2; 2. Characterizing the physical input/output request patterns for NASA ESS applications, which used the Beowulf platform; and 3. Dynamic scheduling techniques for hiding I/O latency in parallel applications such as sparse matrix computations. This last study was conducted on the Intel Paragon and has also provided an experimental evaluation of the Parallel File System (PFS) and parallel input/output on the Paragon. This report is organized as follows. The summary of findings discusses the results of each of the aforementioned 3 studies. Three appendices, each containing a key scholarly research paper that details the work in one of the studies, are included.

  13. Design of the new couplers for C-ADS RFQ

    NASA Astrophysics Data System (ADS)

    Shi, Ai-Min; Sun, Lie-Peng; Zhang, Zhou-Li; Xu, Xian-Bo; Shi, Long-Bo; Li, Chen-Xing; Wang, Wen-Bin

    2015-04-01

    A new special coupler with a bowl-shaped ceramic window for the proton linear accelerator of the Chinese Accelerator Driven System (C-ADS) at the Institute of Modern Physics (IMP) has been simulated and constructed, and continuous wave (CW) beam commissioning through a four-meter long radio frequency quadrupole (RFQ) was completed by the end of July 2014. During the conditioning and beam experiments, problems such as sparking and thermal issues gradually emerged. Finally, two new couplers were conditioned to nearly 110 kW of CW power and 120 kW in pulsed mode, respectively. The 10 mA intensity beam experiments have now been completed, and the couplers showed no thermal or electromagnetic problems during operation. The detailed design and results are presented in the paper. Supported by Strategic Priority Research Program of Chinese Academy of Sciences (XDA03020500)

  14. Thin-ribbon tapered coupler for dielectric waveguides

    NASA Technical Reports Server (NTRS)

    Yeh, C.; Otoshi, T. Y.; Shimabukuro, F. I.

    1994-01-01

    A recent discovery shows that a high-dielectric constant, low-loss, solid material can be made into a ribbon-like waveguide structure to yield an attenuation constant of less than 0.02 dB/m for single-mode guidance of millimeter/submillimeter waves. One of the crucial components that must be invented in order to guarantee the low-loss utilization of this dielectric-waveguide guiding system is the excitation coupler. The traditional tapered-to-a-point coupler for a dielectric rod waveguide fails when the dielectric constant of the dielectric waveguide is large. This article presents a new way to design a low-loss coupler for a high- or low-dielectric constant dielectric waveguide for millimeter or submillimeter waves.

  15. Simulations of optical sensors fabricated from metallic rods couplers

    SciTech Connect

    Singh, M. R.; Balakrishanan, Shankar

    2014-03-31

    We have developed an optical sensing mechanism for photonic couplers fabricated from periodically arranged metallic rods. The metallic rod lattice is embedded between two dielectric waveguides. This structure is called a metallic coupler. Using the transfer matrix method, expressions for the reflection and transmission coefficients of an electromagnetic wave propagating in the waveguides have been obtained. We found that for certain energies the electromagnetic wave is totally reflected from the coupler. Similarly, over a certain energy range the light is totally transmitted. It has also been found that by changing the periodicity of the metallic rods, the transmitted energy can be reflected. The periodicity of the metallic lattice can be modified by applying an external stress or pressure. In other words, the system can be used as a stress or pressure sensor. The present findings can be used to make new types of photonic sensors.
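    To illustrate only the transfer-matrix method named in the abstract, the following Python sketch evaluates reflection and transmission through a one-dimensional periodic layer stack at normal incidence. This is a deliberate simplification: the paper treats a two-dimensional lattice of metallic rods between waveguides, which requires a more elaborate model; all refractive indices and thicknesses below are illustrative placeholders.

        # Minimal 1D transfer-matrix sketch (normal incidence, lossless layers).
        import numpy as np

        def layer_matrix(n, d, lam):
            """Characteristic matrix of a homogeneous layer (index n, thickness d)."""
            k = 2 * np.pi * n / lam
            return np.array([[np.cos(k * d), 1j * np.sin(k * d) / n],
                             [1j * n * np.sin(k * d), np.cos(k * d)]])

        def reflect_transmit(layers, lam, n_in=1.0, n_out=1.0):
            """Return (R, T) for a stack of (n, d) layers between two half-spaces."""
            M = np.eye(2, dtype=complex)
            for n, d in layers:
                M = M @ layer_matrix(n, d, lam)
            (m11, m12), (m21, m22) = M
            denom = (m11 + m12 * n_out) * n_in + (m21 + m22 * n_out)
            r = ((m11 + m12 * n_out) * n_in - (m21 + m22 * n_out)) / denom
            t = 2 * n_in / denom
            return abs(r) ** 2, (n_out / n_in) * abs(t) ** 2

        # Changing the stack period shifts the stop band, mirroring the idea that
        # stress or pressure on the rod lattice changes what is reflected/transmitted.
        stack = [(3.5, 100e-9), (1.5, 150e-9)] * 8
        print(reflect_transmit(stack, lam=1550e-9))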

  16. Presentation of floating mass transducer and Vibroplasty couplers on CT and cone beam CT.

    PubMed

    Mlynski, Robert; Nguyen, Thi Dao; Plontke, Stefan K; Kösling, Sabrina

    2014-04-01

    Various titanium coupling elements, Vibroplasty Couplers, maintaining the attachment of the Floating Mass Transducer (FMT) of the active middle ear implant Vibrant Soundbridge (VSB) to the round window, the stapes suprastructure or the stapes footplate are in use to optimally transfer energy from the FMT to the inner ear fluids. In certain cases it is of interest to radiologically verify the correct position of the FMT coupler assembly. The imaging appearance of the FMT connected to these couplers, however, is not well known. The aim of this study was to present the radiological appearance of correctly positioned Vibroplasty Couplers together with the FMT using two different imaging techniques. Vibroplasty Couplers were attached to the FMT of a Vibrant Soundbridge and implanted in formalin-fixed human temporal bones. Five FMT coupler assemblies were implanted in different positions: conventionally to the incus, a Bell-Coupler, a CliP-Coupler, a Round Window-Coupler and an Oval Window-Coupler. High spatial resolution imaging with Multi-Detector CT (MDCT) and Cone Beam CT (CBCT) was performed on each specimen. Images were evaluated blindly by two radiologists on a visual basis. Middle ear details, identification of the FMT and coupler, position of the FMT coupler assembly and artefacts were assessed. CBCT showed a better spatial resolution and a higher visual image quality than MDCT, but there was no significant advantage over MDCT in delineating the temporal bone structures or the FMT coupler assemblies. The FMT with its coupler element could be clearly identified with the two imaging techniques. The correct positioning of the FMT and all types of couplers could be demonstrated. Both methods, MDCT and CBCT, are appropriate for postoperative localization of the FMT in combination with Vibroplasty Couplers and for verifying their correct position. If CBCT is available, this method is recommended due to its better spatial resolution and fewer metal artifacts. PMID:23529745

  17. Fiber optic data bus using Frequency Division Multiplexing (FDM) and an asymmetric coupler

    NASA Technical Reports Server (NTRS)

    Zanger, M.; Webster, L.

    1984-01-01

    A fiber optic data bus, using frequency division multiplexing (FDM) is discussed. The use of FDM is motivated by the need to avoid central control of the bus operation. A major difficulty of such a data bus is introduced by the couplers. An efficient low loss access coupler with an asymmetric structure is presented, and manufacturing processes for the coupler are proposed.

  18. 30 CFR 77.805 - Cable couplers and connection boxes; minimum design requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.805 Cable couplers and connection boxes; minimum design requirements. (a)(1) Couplers that are used in medium- or high-voltage power... materials other than metal. (2) Cable couplers shall be adequate for the intended current and voltage....

  19. 30 CFR 77.805 - Cable couplers and connection boxes; minimum design requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.805 Cable couplers and connection boxes; minimum design requirements. (a)(1) Couplers that are used in medium- or high-voltage power... materials other than metal. (2) Cable couplers shall be adequate for the intended current and voltage....

  20. 30 CFR 77.805 - Cable couplers and connection boxes; minimum design requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... WORK AREAS OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.805 Cable couplers and connection boxes; minimum design requirements. (a)(1) Couplers that are used in medium- or high-voltage power... materials other than metal. (2) Cable couplers shall be adequate for the intended current and voltage....

  1. Output coupler design of unstable cavities for excimer lasers.

    PubMed

    Giuri, C; Perrone, M R; Piccinno, V

    1997-02-20

    We tested the performance of a XeCl laser with unstable resonators using as an output coupler a phase unifying (PU) mirror, a super-Gaussian mirror, and a hard-edge mirror. The quantitative impact of the output coupler design on the energy extraction efficiency, near-field profile, far-field energy distribution, and spatial coherence time evolution has been investigated. Laser beams of larger brightness have been obtained with the PU unstable cavity. A faster growth of the laser beam spatial coherence has been observed with the PU cavity by time-resolved, far-field measurements. PMID:18250783
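    For context, the super-Gaussian graded-reflectivity mirrors compared above are commonly described (textbook form, not necessarily the exact profile used in the experiment) by a radial reflectivity

        R(r) = R_0 \exp\!\left[-2\,(r/w_m)^{n}\right],

    where n is the super-Gaussian order: n = 2 recovers a Gaussian profile, while n → ∞ approaches the hard-edge mirror that the paper uses as a reference case.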

  2. A Miniaturized Branch Line Coupler with Harmonic Suppression

    NASA Astrophysics Data System (ADS)

    Hayati, Mohsen; Ehteshami, Mahin

    2014-07-01

    In this paper, a miniaturized branch-line coupler with harmonic suppression is presented. Approximately 80% size reduction in comparison with the conventional branch-line coupler is achieved using a stepped-impedance bent transmission line loaded by an in-line beeline structure. Experimental results show more than 24 dB suppression of the primary odd harmonics (3rd, 5th, and 7th). A relatively low measured insertion loss (3.2 dB), the desired coupling (3.1 dB), and an 89.7° phase difference between the through and coupled ports at the center frequency are verified.

  3. Subwavelength-grating-assisted broadband polarization-independent directional coupler.

    PubMed

    Liu, Lu; Deng, Qingzhong; Zhou, Zhiping

    2016-04-01

    This Letter presents both numerical and experimental results of a polarization-independent directional coupler based on slot waveguides with a subwavelength grating. The measured coupling efficiency is 97.4% for TE and 96.7% for TM polarization at a wavelength of 1550 nm. Further analysis shows that the proposed subwavelength grating directional coupler has a fabrication tolerance of ±20  nm for the grating structure and that the coupling efficiencies for the two polarizations are both higher than -0.5  dB (∼89%), exceeding the entire C-band (1525-1570 nm) experimentally. PMID:27192309

  4. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  5. A History of the Chemical Innovations in Silver-Halide Materials for Color Photography II. Color-Forming Development, Part 5. Coupler Innovations after the 1970's—Two-Equivalent Coupler and DIR Coupler

    NASA Astrophysics Data System (ADS)

    Oishi, Yasushi

    From the 1970's on, several manufacturers including Fuji Film, Konica and Agfa-Gevaert participated in innovating color photographic materials by adding their own coupler chemistry to the technological architecture built by Kodak before then. One area of their major advances was the development of couplers having a coupling-off organic group. One of the resulting functional forms was the two-equivalent coupler, which made the dye-forming process more efficient and allowed the photosensitive layers to be made thinner. Another was the DIR coupler, which dramatically improved the image quality of color negative materials. In this paper a historical overview of these innovations is constructed from the technical documents, mainly patents.

  6. Statistical properties of high performance cesium standards

    NASA Technical Reports Server (NTRS)

    Percival, D. B.

    1973-01-01

    The intermediate-term frequency stability of a group of new high-performance cesium beam tubes at the U.S. Naval Observatory was analyzed from two viewpoints: (1) by comparison of the high-performance standards to the MEAN(USNO) time scale and (2) by intercomparisons among the standards themselves. For sampling times up to 5 days, the frequency stability of the high-performance units shows significant improvement over older commercial cesium beam standards.

  7. High performance carbon nanocomposites for ultracapacitors

    DOEpatents

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  8. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  9. Analysis of a magneto-rheological coupler with misalignment

    NASA Astrophysics Data System (ADS)

    Lu, Guangtao; Li, Yourong; Song, Gangbing

    2011-10-01

    In most modeling analyses of drum type magneto-rheological (MR) fluid coupling devices, the centerlines of the drums are often assumed to be coincident; however, misalignments often exist in many practical applications, which change the characteristics of the MR fluid devices. This paper focuses on the characteristics of a rotary drum type MR coupler with a misalignment. In this paper, we discuss the operational modes of MR fluids and derive equations to describe the behavior of MR fluids in MR couplers with misalignment by using the Navier-Stokes equation. Based on the results of numerical computation, we found that the misalignment due to different tolerance grades has a limited effect on the pressure in the drums, the shear stress of the MR fluid, and the torque of an MR coupler. However, when the eccentricity ratio increases to 23.3% or the tolerance grade of the two drums is 10 (ISO standard), the effect, especially the effect on the pressure, increases greatly, and should be considered during the design of seals for MR couplers. In addition, the pressure and the shear forces will apply forces on the drums along the vertical and horizontal directions, and change periodically as the drums rotate, which should be considered when designing them.
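    MR fluids in shear are most often modeled with the Bingham plastic law (a standard constitutive form, quoted for orientation and not necessarily the exact model adopted in the paper):

        \tau = \tau_y(H)\,\mathrm{sgn}(\dot{\gamma}) + \eta\,\dot{\gamma} \quad (|\tau| > \tau_y), \qquad \dot{\gamma} = 0 \ \ \text{otherwise},

    where the yield stress τ_y grows with the applied magnetic field H. For a thin concentric gap of width h, mean radius R, and active length L rotating with relative speed Δω, the transmitted torque is then roughly T ≈ 2πR²L[τ_y(H) + ηRΔω/h]; misalignment makes the local gap, and hence the local shear rate and pressure, vary around the circumference, which is the effect analyzed above.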

  10. 49 CFR 179.14 - Coupler vertical restraint system.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... material failure, vertical loads of at least 200,000 pounds (90,718.5 kg) applied in upward and downward directions in combination with buff loads of 2,000 pounds (907.2 kg), when coupled to cars which may or may... this section shall be achieved by verification testing of the coupler vertical restraint system...

  11. Interferometer using a 3 × 3 coupler and Faraday mirrors

    NASA Astrophysics Data System (ADS)

    Breguet, J.; Gisin, N.

    1995-06-01

    A new interferometric setup using a 3 × 3 coupler and two Faraday mirrors is presented. It has the advantages of being built only with passive components, of freedom from the polarization fading problem, and of operation with an LED. It is well suited for sensing time-dependent signals and does not depend on reciprocal or nonreciprocal constant perturbations.
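    The passive-demodulation advantage of the symmetric 3 × 3 coupler (the Faraday mirrors, in turn, are what remove the polarization fading) comes from its three outputs being mutually shifted by 120°, so at least one output always retains sensitivity; in the usual idealized form (not specific to this paper),

        I_k = A + B\cos\!\left(\phi(t) + \tfrac{2\pi k}{3}\right), \qquad k = 0, 1, 2,

    which allows the time-dependent phase φ(t) to be recovered from the three intensities without any active bias control.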

  12. Common Factors of High Performance Teams

    ERIC Educational Resources Information Center

    Jackson, Bruce; Madsen, Susan R.

    2005-01-01

    Utilization of work teams is now wide spread in all types of organizations throughout the world. However, an understanding of the important factors common to high performance teams is rare. The purpose of this content analysis is to explore the literature and propose findings related to high performance teams. These include definition and types,…

  13. Properties Of High-Performance Thermoplastics

    NASA Technical Reports Server (NTRS)

    Johnston, Norman J.; Hergenrother, Paul M.

    1992-01-01

    Report presents review of principal thermoplastics (TP's) used to fabricate high-performance composites. Sixteen principal TP's considered as candidates for fabrication of high-performance composites presented along with names of suppliers, Tg, Tm (for semicrystalline polymers), and approximate maximum processing temperatures.

  14. An Associate Degree in High Performance Manufacturing.

    ERIC Educational Resources Information Center

    Packer, Arnold

    In order for more individuals to enter higher paying jobs, employers must create a sufficient number of high-performance positions (the demand side), and workers must acquire the skills needed to perform in these restructured workplaces (the supply side). Creating an associate degree in High Performance Manufacturing (HPM) will help address four…

  15. High performance image processing of SPRINT

    SciTech Connect

    DeGroot, T.

    1994-11-15

    This talk will describe computed tomography (CT) reconstruction using filtered back-projection on SPRINT parallel computers. CT is a computationally intensive task, typically requiring several minutes to reconstruct a 512x512 image. SPRINT and other parallel computers can be applied to CT reconstruction to reduce computation time from minutes to seconds. SPRINT is a family of massively parallel computers developed at LLNL. SPRINT-2.5 is a 128-node multiprocessor whose performance can exceed twice that of a Cray-Y/MP. SPRINT-3 will be 10 times faster. Described will be the parallel algorithms for filtered back-projection and their execution on SPRINT parallel computers.
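    As orientation for the algorithm named above, the following serial NumPy sketch performs filtered back-projection (ramp filter followed by back-projection). It is illustrative only; the SPRINT work distributed these steps across many nodes, which the sketch does not attempt, and the sinogram used here is a random placeholder.

        # Minimal serial filtered back-projection sketch (ramp filter + back-projection).
        import numpy as np

        def fbp(sinogram, thetas_deg):
            """sinogram: (n_angles, n_detectors) array of parallel-beam projections."""
            n_angles, n_det = sinogram.shape
            # Ramp filter applied in the Fourier domain, one projection row at a time.
            freqs = np.fft.fftfreq(n_det)
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
            # Back-project each filtered view over the image grid.
            coords = np.arange(n_det) - n_det / 2
            x, y = np.meshgrid(coords, coords)
            image = np.zeros((n_det, n_det))
            for view, theta in zip(filtered, np.deg2rad(thetas_deg)):
                s = x * np.cos(theta) + y * np.sin(theta) + n_det / 2   # detector coordinate
                image += np.interp(s, np.arange(n_det), view, left=0.0, right=0.0)
            return image * np.pi / n_angles

        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        recon = fbp(np.random.rand(180, 128), angles)   # placeholder sinogram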

  16. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: ''Can parallel computers be used to do large-scale scientific computations?'' As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  17. Cray XMT Brings New Energy to High-Performance Computing

    SciTech Connect

    Chavarría-Miranda, Daniel; Gracio, Deborah K.; Marquez, Andres; Nieplocha, Jaroslaw; Scherrer, Chad; Sofia, Heidi J.

    2008-09-30

    The ability to solve our nation’s most challenging problems—whether it’s cleaning up the environment, finding alternative forms of energy or improving public health and safety—requires new scientific discoveries. High performance experimental and computational technologies from the past decade are helping to accelerate these scientific discoveries, but they introduce challenges of their own. The vastly increasing volumes and complexities of experimental and computational data pose significant challenges to traditional high-performance computing (HPC) platforms as terabytes to petabytes of data must be processed and analyzed. And the growing complexity of computer models that incorporate dynamic multiscale and multiphysics phenomena place enormous demands on high-performance computer architectures. Just as these new challenges are arising, the computer architecture world is experiencing a renaissance of innovation. The continuing march of Moore’s law has provided the opportunity to put more functionality on a chip, enabling the achievement of performance in new ways. Power limitations, however, will severely limit future growth in clock rates. The challenge will be to obtain greater utilization via some form of on-chip parallelism, but the complexities of emerging applications will require significant innovation in high-performance architectures. The Cray XMT, the successor to the Tera/Cray MTA, provides an alternative platform for addressing computations that stymie current HPC systems, holding the potential to substantially accelerate data analysis and predictive analytics for many complex challenges in energy, national security and fundamental science that traditional computing cannot do.

  18. Implementing High Performance Remote Method Invocation in CCA

    SciTech Connect

    Yin, Jian; Agarwal, Khushbu; Krishnan, Manoj Kumar; Chavarría-Miranda, Daniel; Gorton, Ian; Epperly, Thomas G.

    2011-09-30

    We report our effort in engineering a high performance remote method invocation (RMI) mechanism for the Common Component Architecture (CCA). This mechanism provides a highly efficient and easy-to-use means for distributed computing in CCA, enabling CCA applications to effectively leverage parallel systems to accelerate computations. This work is built on the previous work of Babel RMI. Babel is a high performance language interoperability tool that is used in CCA so that scientific application writers can share, reuse, and compose applications from software components written in different programming languages. Babel provides a transparent and flexible RMI framework for distributed computing. However, the existing Babel RMI implementation is built on top of TCP and does not provide the level of performance required to distribute fine-grained tasks. We observed that the main reason the TCP-based RMI does not perform well is that it does not efficiently utilize the high performance interconnect hardware on a cluster. We have implemented a high performance RMI protocol, HPCRMI. HPCRMI achieves low latency by building on top of a low-level portable communication library, Aggregated Remote Message Copy Interface (ARMCI), and minimizing communication for each RMI call. Our design allows an RMI operation to be completed by only two RDMA operations. We also aggressively optimize our system to reduce copying. In this paper, we discuss the design and our experimental evaluation of this protocol. Our experimental results show that our protocol can improve RMI performance by an order of magnitude.

  19. Strategy Guideline: High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  20. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  1. High-performance computing in structural mechanics and engineering

    SciTech Connect

    Adeli, H.; Kamat, M.P.; Kulkarni, G.; Vanluchene, R.D. (Georgia Inst. of Technology, Atlanta; Montana State Univ., Bozeman)

    1993-07-01

    Recent advances in computer hardware and software have made multiprocessing a viable and attractive technology. This paper reviews high-performance computing methods in structural mechanics and engineering through the use of a new generation of multiprocessor computers. The paper presents an overview of vector pipelining, performance metrics for parallel and vector computers, programming languages, and general programming considerations. Recent developments in the application of concurrent processing techniques to the solution of structural mechanics and engineering problems are reviewed, with special emphasis on linear structural analysis, nonlinear structural analysis, transient structural analysis, dynamics of multibody flexible systems, and structural optimization. 64 refs.

  2. High Performance Computing with Harness over InfiniBand

    SciTech Connect

    Valentini, Alessandro; Di Biagio, Christian; Batino, Fabrizio; Pennella, Guido; Palma, Fabrizio; Engelmann, Christian

    2009-01-01

    Harness is an adaptable, plug-in-based middleware framework able to support distributed parallel computing. At present it is based on the Ethernet protocol, which cannot guarantee high throughput or real-time (deterministic) performance. In recent years both research and industry have developed new network architectures (InfiniBand, Myrinet, iWARP, etc.) to overcome those limits. This paper concerns the integration between Harness and InfiniBand, focusing on two solutions: IP over InfiniBand (IPoIB) and the Socket Direct Protocol (SDP). They allow the Harness middleware to take advantage of the enhanced features provided by the InfiniBand Architecture.

  3. IBM SP high-performance networking with a GRF.

    SciTech Connect

    Navarro, J.P.

    1999-05-27

    Increasing use of highly distributed applications, demand for faster data exchange, and highly parallel applications can push the limits of conventional external networking for IBM SP sites. In technical computing applications we have observed a growing use of a pipeline of hosts and networks collaborating to collect, process, and visualize large amounts of real-time data. The GRF, a high-performance IP switch from Ascend and IBM, is the first backbone network switch to offer a media card that can directly connect to an SP Switch. This enables switch-attached hosts in an SP complex to communicate at near SP Switch speeds with other GRF-attached hosts and networks.

  4. Challenges for high-performance networking for exascale computing.

    SciTech Connect

    Barrett, Brian W.; Hemmert, K. Scott; Underwood, Keith Douglas; Brightwell, Ronald Brian

    2010-05-01

    Achieving the next three orders of magnitude performance increase to move from petascale to exascale computing will require significant advancements in several fundamental areas. Recent studies have outlined many of the challenges in hardware and software that will need to be addressed. In this paper, we examine these challenges with respect to high-performance networking. We describe the repercussions of anticipated changes to computing and networking hardware and discuss the impact that alternative parallel programming models will have on the network software stack. We also present some ideas on possible approaches that address some of these challenges.

  5. Semi-reciprocal polarization maintaining fibre coupler with distinctive transmission characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Xinyue; Thomas, Freya; Wang, Ziyu

    2015-11-01

    Optical couplers are very important devices in optical communication systems and optical sensor systems. Several types of optical couplers with different materials or different transmission characteristics have been reported. Here we propose a semi-reciprocal polarization maintaining fibre coupler with unique transmission characteristics, which is distinct from conventional polarization maintaining fibre couplers and polarization beam splitters, and investigate the characteristics of the coupler theoretically and experimentally. The experimental results show that for circularly and elliptically polarized input light, the proposed coupler will act both as an in-line polariser and a conventional polarization maintaining fibre coupler. The output polarization extinction ratio of the transmission arm is 31.79 dB at a centre wavelength of 841 nm. For linearly polarized input light, the coupler will merely act as a conventional 3 dB polarization maintaining fibre coupler. The unique features of the proposed coupler enable the removal of polarisers from optical sensor systems and coherent optical communication systems, and reduce the insertion loss and production cost of the optical path. Therefore there is wide application for this device in optical sensor systems and optical communication systems.

  6. Semi-reciprocal polarization maintaining fibre coupler with distinctive transmission characteristics

    PubMed Central

    Wang, Xinyue; Thomas, Freya; Wang, Ziyu

    2015-01-01

    Optical couplers are very important devices in optical communication systems and optical sensor systems. Several types of optical couplers with different materials or different transmission characteristics have been reported. Here we propose a semi-reciprocal polarization maintaining fibre coupler with unique transmission characteristics, which is distinct from conventional polarization maintaining fibre couplers and polarization beam splitters, and investigate the characteristics of the coupler theoretically and experimentally. The experimental results show that for circularly and elliptically polarized input light, the proposed coupler will act both as an in-line polariser and a conventional polarization maintaining fibre coupler. The output polarization extinction ratio of the transmission arm is 31.79 dB at a centre wavelength of 841 nm. For linearly polarized input light, the coupler will merely act as a conventional 3 dB polarization maintaining fibre coupler. The unique features of the proposed coupler enable the removal of polarisers from optical sensor systems and coherent optical communication systems, and reduce the insertion loss and production cost of the optical path. Therefore there is wide application for this device in optical sensor systems and optical communication systems. PMID:26611837

  7. Concurrent file operations in a high performance FORTRAN

    NASA Technical Reports Server (NTRS)

    Brezany, Peter; Gerndt, Michael; Mehrotra, Piyush; Zima, Hans

    1992-01-01

    Distributed memory multiprocessor systems can provide the computing power necessary for large scale scientific applications. A critical performance issue for a number of these applications is the efficient transfer of data to secondary storage. Recently several research groups have proposed FORTRAN language extensions for exploiting the data parallelism of such scientific codes on distributed memory architectures. However, few of these high performance FORTRANs provide appropriate constructs for controlling the use of the parallel I/O capabilities of modern multiprocessing machines. In this paper, we propose constructs to specify I/O operations for distributed data structures in the context of Vienna Fortran. These operations can be used by the programmer to provide information which can help the compiler and runtime environment make the most efficient use of the I/O subsystem.
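    The proposed Vienna Fortran constructs are not reproduced here; as a rough, hedged analogue, the Python sketch below uses mpi4py's MPI-IO (assumed available) to write a block-distributed array to one shared file with a single collective call, the kind of operation such I/O constructs would allow the compiler and runtime to generate.

    ```python
    # Rough analogue (not Vienna Fortran): each rank writes its block of a
    # distributed array to one shared file with a collective MPI-IO call.
    # Assumes mpi4py and an MPI launcher (e.g. mpirun -n 4 python this.py).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local_n = 1000                                    # block size owned by this rank
    local = np.full(local_n, rank, dtype=np.float64)  # this rank's block of the array

    fh = MPI.File.Open(comm, "snapshot.bin",
                       MPI.MODE_WRONLY | MPI.MODE_CREATE)
    offset = rank * local.nbytes                      # block-distributed file layout
    fh.Write_at_all(offset, local)                    # collective, executed in parallel
    fh.Close()
    ```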

  8. Productive high-performance software for OpenCL devices

    NASA Astrophysics Data System (ADS)

    Melonakos, John M.; Yalamanchili, Pavan; McClanahan, Chris; Arshad, Umar; Landes, Michael; Jamboti, Shivapriya; Joshi, Abhijit; Mohammed, Shehzan; Spafford, Kyle; Venugopalakrishnan, Vishwanath; Malcolm, James

    2013-05-01

    Over the last three decades, CPUs have continued to produce large performance improvements from one generation to the next. However, CPUs have recently hit a performance wall and need parallel computing to move forward. Parallel computing over the next decade will become increasingly defined by heterogeneous computing, involving the use of accelerators in addition to CPUs to get computational tasks done. In order to use an accelerator, software changes must be made. Regular x86-based compilers cannot compile code to run on accelerators without these needed changes. The amount of software change required varies depending upon the availability of and reliance upon software tools that increase performance and productivity. Writing software that leverages the best parallel computing hardware, adapts well to the rapid pace of hardware updates, and minimizes developer effort is the industry's goal. OpenCL is the standard around which developers are able to achieve parallel performance. OpenCL itself is too difficult to program to receive general adoption, but productive, high-performing software libraries are becoming increasingly popular and capable of delivering lasting value to user applications.
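    For illustration only (pyopencl assumed installed; not taken from the paper), the sketch below shows how much ceremony a raw OpenCL vector addition requires; this is exactly the boilerplate that the productive, high-performing libraries discussed above aim to hide behind array-style APIs.

    ```python
    # Minimal raw-OpenCL sketch via pyopencl: even a vector add needs an explicit
    # context, device buffers, kernel source, a launch, and a copy back.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(1 << 20).astype(np.float32)
    b = np.random.rand(1 << 20).astype(np.float32)
    out = np.empty_like(a)

    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_g = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }""").build()

    prg.add(queue, a.shape, None, a_g, b_g, out_g)   # one work-item per element
    cl.enqueue_copy(queue, out, out_g)               # copy the result back to the host
    assert np.allclose(out, a + b)
    ```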

  9. Design and fabrication tolerance analysis of multimode interference couplers

    NASA Astrophysics Data System (ADS)

    Morrissey, P. E.; Yang, H.; Sheehan, R. N.; Corbett, B.; Peters, F. H.

    2015-04-01

    This paper examines the sensitivity of InP multimode interference couplers (MMIs) to fabrication errors caused by over or under exposure during device processing. MMIs are modelled using modal propagation analysis, which provides a rapid means of simulating the performance of such couplers across a large design space with varying structural parameters. We show for the first time that when MMIs are analysed with fabrication errors in mind, there exists an optimal set of design parameters for a given input waveguide width which offers the best tolerance to fabrication errors while maximising optical throughput and ensuring compact size. Such MMIs are ideally suited for use in photonic integrated circuits, where robust performance and the smallest possible device footprint are required.
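    The paper's modal propagation analysis is not reproduced here; the hedged sketch below uses only the standard self-imaging relation for the beat length (with assumed index, wavelength, and width values) to show why an MMI's optimal length, which scales with the square of the effective width, is sensitive to the width errors introduced by over or under exposure.

    ```python
    # Back-of-the-envelope check (standard self-imaging relation, assumed values):
    # the MMI beat length scales as W_e^2 / lambda, so a small width error shifts
    # the optimal device length by roughly 2 * (dW / W_e).
    n_r = 3.2        # assumed effective index (InP-like waveguide)
    lam = 1.55e-6    # wavelength, m
    W_e = 6.0e-6     # assumed effective MMI width, m

    L_pi = 4.0 * n_r * W_e**2 / (3.0 * lam)   # beat length of the two lowest modes
    print(f"beat length L_pi = {L_pi * 1e6:.1f} um")

    dW = 0.05e-6                              # 50 nm over/under exposure error
    print(f"relative shift of the optimal length ~ {2 * dW / W_e:.1%}")
    ```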

  10. Beam Fields in an Integrated Cavity, Coupler and Window Configuration

    SciTech Connect

    Weathersby, Stephen; Novokhatski, Alexander (SLAC)

    2010-02-10

    In a multi-bunch high current storage ring, beam generated fields couple strongly into the RF cavity coupler structure when beam arrival times are in resonance with cavity fields. In this study the integrated effect of beam fields over several thousand RF periods is simulated for the complete cavity, coupler, window and waveguide system of the PEP-II B-factory storage ring collider. We show that in this case the beam generated fields at frequencies corresponding to several bunch spacings give rise to high field strength near the ceramic window, which could limit the performance of future high current storage rings such as PEP-X or Super B-factories.

  11. Coplanar-waveguide/microstrip probe coupler and applications to antennas

    NASA Technical Reports Server (NTRS)

    Simons, R. N.; Lee, R. Q.

    1990-01-01

    A method to couple microwave power from a coplanar waveguide to a microstrip line on opposite sides of a ground plane is demonstrated. The coupler uses a metallic post which passes through an aperture on the ground plane connecting the strip conductor of the coplanar waveguide to the microstrip line. The measured insertion loss and return loss are about 1 dB and 10 dB, respectively, across the frequency range of 0.045-6.5 GHz. To demonstrate potential applications of the coupler as a feeding network for a microstrip patch array, measured radiation patterns of two rectangular patch antennas with a direct coplanar-waveguide/microstrip feed and with a proximity coupled coplanar-waveguide/microstrip feed are presented.

  12. A submicron broadband surface-plasmon-polariton unidirectional coupler

    PubMed Central

    Liao, Huimin; Li, Zhi; Chen, Jianjun; Zhang, Xiang; Yue, Song; Gong, Qihuang

    2013-01-01

    The manipulation of light propagation is a basic subject in optics and has many important applications. With the development of nano-optics, this area has been downscaled to wavelength or even subwavelength scales. One of the most efficient ways to control light propagation is to exploit interference effects. Here, by manipulating the interference between two nanogrooves on a metal surface, we realize a submicron broadband surface-plasmon-polariton (SPP) unidirectional coupler. More importantly, we find an anomalous bandwidth shrinking behavior in the proposed SPP unidirectional coupler as the groove separation is down to a subwavelength scale of one-quarter of the SPP wavelength. This abnormal behavior is well explained by considering the contribution of the near-field quasi-cylindrical waves in addition to the interference of propagating SPPs and the dispersion effects of individual grooves. Such near-field effects provide new opportunities for the design of ultracompact optical devices. PMID:23728422

  13. Coupled resonator filter with single-layer acoustic coupler.

    PubMed

    Jamneala, Tiberiu; Small, Martha; Ruby, Rich; Larson, John D

    2008-10-01

    We discuss the operation of novel coupled-resonator filters with single-layer acoustic couplers. Our analysis employs the physical Mason model for acoustic resonators. Their simpler fabrication process is counterbalanced by the high acoustic attenuation of suitable coupler materials. At high levels of attenuation, both the phase and the acoustic impedance must be treated as complex quantities to accurately predict the filter insertion loss. We demonstrate that the typically poor near-band rejection of coupled resonator filters can be improved at the die level by connecting a small capacitance between the input and output of the filter to produce a pair of tunable transmission minima. We make use of these theoretical findings to fabricate coupled resonator filters operating at 2.45 GHz. PMID:18986880

  14. High-performance simulations for atmospheric pressure plasma reactor

    NASA Astrophysics Data System (ADS)

    Chugunov, Svyatoslav

    Plasma-assisted processing and deposition of materials is an important component of modern industrial applications, with plasma reactors accounting for 30% to 40% of manufacturing steps in microelectronics production. Development of new flexible electronics increases demands for efficient high-throughput deposition methods and roll-to-roll processing of materials. The current work represents an attempt at practical design and numerical modeling of a plasma enhanced chemical vapor deposition system. The system utilizes plasma at standard pressure and temperature to activate a chemical precursor for protective coatings. A specially designed linear plasma head, consisting of two parallel plates with electrodes placed in a parallel arrangement, is used to resolve clogging issues of currently available commercial plasma heads, as well as to increase the flow-rate of the processed chemicals and to enhance the uniformity of the deposition. A test system is built and discussed in this work. In order to improve operating conditions of the setup and the quality of the deposited material, we perform numerical modeling of the plasma system. The theoretical and numerical models presented in this work comprehensively describe plasma generation, recombination, and advection in a channel of arbitrary geometry. Number densities of plasma species, their energy content, the electric field, and rate parameters are accurately calculated and analyzed in this work. Some interesting engineering outcomes are discussed in connection with the proposed setup. The numerical model is implemented with high-performance parallel techniques and evaluated on a cluster for parallel calculations. The typical performance increase, calculation speed-up, parallel fraction of the code, and overall efficiency of the parallel implementation are discussed in detail.
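    As a purely illustrative aside (numbers assumed, not taken from the dissertation), the quantities mentioned above, speed-up, parallel fraction, and parallel efficiency, are commonly related through Amdahl's law:

    ```python
    # Amdahl's law: serial fraction bounds the achievable speed-up and efficiency.
    def amdahl_speedup(parallel_fraction: float, n_procs: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_procs)

    p = 0.95                       # assumed parallel fraction of the solver
    for n in (8, 32, 128):
        s = amdahl_speedup(p, n)
        print(f"{n:4d} procs: speed-up {s:5.1f}x, efficiency {s / n:.0%}")
    ```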

  15. Two-mode multiplexer and demultiplexer based on adiabatic couplers.

    PubMed

    Xing, Jiejiang; Li, Zhiyong; Xiao, Xi; Yu, Jinzhong; Yu, Yude

    2013-09-01

    A two-mode (de)multiplexer based on adiabatic couplers is proposed and experimentally demonstrated. The experimental results are in good agreement with the simulations. An ultralow mode cross talk below -36 dB and a low insertion loss of about 0.3 dB over a broad bandwidth from 1500 to 1600 nm are measured. The design is also fabrication-tolerant, and the insertion loss can be further improved in the future. PMID:23988986

  16. The Asymmetric Active Coupler: Stable Nonlinear Supermodes and Directed Transport

    PubMed Central

    Kominis, Yannis; Bountis, Tassos; Flach, Sergej

    2016-01-01

    We consider the asymmetric active coupler (AAC) consisting of two coupled dissimilar waveguides with gain and loss. We show that under generic conditions, not restricted by parity-time symmetry, there exist finite-power, constant-intensity nonlinear supermodes (NS), resulting from the balance between gain, loss, nonlinearity, coupling and dissimilarity. The system is shown to possess non-reciprocal dynamics enabling directed power transport functionality. PMID:27640818

  17. Quantum superchemistry in an output coupler of coherent matter waves

    SciTech Connect

    Jing, H.; Cheng, J.

    2006-12-15

    We investigate the quantum superchemistry or Bose-enhanced atom-molecule conversions in a coherent output coupler of matter waves, as a simple generalization of the two-color photoassociation. The stimulated effects of molecular output step and atomic revivals are exhibited by steering the rf output couplings. The quantum noise-induced molecular damping occurs near a total conversion in a levitation trap. This suggests a feasible two-trap scheme to make a stable coherent molecular beam.

  18. The Asymmetric Active Coupler: Stable Nonlinear Supermodes and Directed Transport.

    PubMed

    Kominis, Yannis; Bountis, Tassos; Flach, Sergej

    2016-01-01

    We consider the asymmetric active coupler (AAC) consisting of two coupled dissimilar waveguides with gain and loss. We show that under generic conditions, not restricted by parity-time symmetry, there exist finite-power, constant-intensity nonlinear supermodes (NS), resulting from the balance between gain, loss, nonlinearity, coupling and dissimilarity. The system is shown to possess non-reciprocal dynamics enabling directed power transport functionality. PMID:27640818

  19. The Asymmetric Active Coupler: Stable Nonlinear Supermodes and Directed Transport

    NASA Astrophysics Data System (ADS)

    Kominis, Yannis; Bountis, Tassos; Flach, Sergej

    2016-09-01

    We consider the asymmetric active coupler (AAC) consisting of two coupled dissimilar waveguides with gain and loss. We show that under generic conditions, not restricted by parity-time symmetry, there exist finite-power, constant-intensity nonlinear supermodes (NS), resulting from the balance between gain, loss, nonlinearity, coupling and dissimilarity. The system is shown to possess non-reciprocal dynamics enabling directed power transport functionality.

  20. Terahertz quantum well photodetectors with reflection-grating couplers

    SciTech Connect

    Zhang, R.; Fu, Z. L.; Gu, L. L.; Guo, X. G.; Cao, J. C.

    2014-12-08

    The design, fabrication, and characterization of terahertz (THz) quantum well photodetectors with one-dimensional reflection-grating coupler are presented. It is found that the reflection gratings could effectively couple the THz waves normally incident to the device. Compared with the 45-degree facet sample, the peak responsivity of this grating-coupled detector is enhanced by over 20%. The effects of the gratings on the photocurrent spectra are also analyzed.

  1. Pseudo-circulator implemented as a multimode fiber coupler

    NASA Astrophysics Data System (ADS)

    Bulota, F.; Bélanger, P.; Leduc, M.; Boudoux, C.; Godbout, N.

    2016-03-01

    We present a linear all-fiber device exhibiting the functionality of a circulator, albeit for multimode fibers. We define a pseudo-circulator as a linear three-port component that transfers most of a multimode light signal from Port 1 to Port 2, and from Port 2 to Port 3. Unlike a traditional circulator, which depends on a nonlinear phenomenon to achieve non-reciprocal behavior, our device is a linear component that seemingly breaks the principle of reciprocity by exploiting the variation of etendue between the multimode fibers in the coupler. The pseudo-circulator is implemented as a 2x2 asymmetric multimode fiber coupler, fabricated using the fusion-tapering technique. The coupler is asymmetric in its transverse fused section. The two multimode fibers differ in area, thus favoring the transfer of light from the smaller to the bigger fiber. The desired difference in area is obtained by tapering one of the fibers before the fusion process. Using this technique, we have successfully fabricated a pseudo-circulator surpassing a 50/50 beam-splitter in efficiency. Across the visible and near-IR spectrum, the transmission ratio exceeds 77% from Port 1 to Port 2, and 80% from Port 2 to Port 3. The excess loss is less than 0.5 dB, regardless of the entry port.
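    As a quick unit check on the figures quoted above (only the conversion is shown; the split to the remaining port is not measured here), the port transmission ratios and the excess loss can be moved between linear and decibel form as follows:

    ```python
    # Convert the quoted transmission ratios and excess loss between linear and dB.
    import math

    def to_db(ratio: float) -> float:
        return -10.0 * math.log10(ratio)

    print(f"Port 1 -> 2: {to_db(0.77):.2f} dB insertion loss")   # 77 % transmission
    print(f"Port 2 -> 3: {to_db(0.80):.2f} dB insertion loss")   # 80 % transmission
    print(f"0.5 dB excess loss -> total output >= {10 ** (-0.5 / 10):.0%} of input")
    ```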

  2. Inductive coupler for downhole components and method for making same

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Briscoe, Michael A.; Sneddon, Cameron; Fox, Joe

    2006-05-09

    The present invention includes a method of making an inductive coupler for downhole components. The method includes providing an annular housing, preferably made of steel, the housing having a recess. A conductor, preferably an insulated wire, is also provided along with a plurality of generally U-shaped magnetically conducting, electrically insulating (MCEI) segments. Preferably, the MCEI segments comprise ferrite. An assembly is formed by placing the plurality of MCEI segments within the recess in the annular housing. The segments are aligned to form a generally circular trough. A first portion of the conductor is placed within the circular trough. This assembly is consolidated with a meltable polymer which fills spaces between the segments, annular housing and the first portion of the conductor. The invention also includes an inductive coupler including an annular housing having a recess defined by a bottom portion and two opposing side wall portions. At least one side wall portion includes a lip extending toward but not reaching the other side wall portion. A plurality of generally U-shaped MCEI segments, preferably comprised of ferrite, are disposed in the recess and aligned so as to form a circular trough. The coupler further includes a conductor disposed within the circular trough and a polymer filling spaces between the segments, the annular housing and the conductor.

  3. Design of RF power coupler for superconducting cavities

    NASA Astrophysics Data System (ADS)

    Kutsaev, S. V.; Kelly, M. P.; Ostroumov, P. N.

    2012-11-01

    A new power coupler has been designed and is being prototyped by Argonne National Laboratory (ANL) for use with any of the ANL proposed superconducting (SC) half- or quarter-wave cavities for SARAF [1] and Project-X [2]. The 50 Ohm coaxial capacitive coupler is required to operate in the CW regime with up to 15 kW of forward power and under any condition for the reflected power. A key feature is a moveable copper plated stainless steel bellows which will permit up to 3 cm of axial stroke and adjustment of the external quality factor by roughly one order of magnitude in the range of 10^5 to 10^6. The mechanical and vacuum design includes two ceramic windows, one operating at room temperature and another at 70 Kelvin. The two window design allows the portion of the coupler assembled onto the SC cavity in the clean room to be compact and readily cleanable. Other design features include thermal intercepts to provide a large margin for RF heating and a mechanical guide assembly to operate cold and under vacuum with high reliability.

  4. Design of the spoke cavity ED&D input coupler.

    SciTech Connect

    Schmierer, E. N.; Chan, K. D.; Gentzlinger, R.C.; Haynes, W. B.; Krawczyk, F. L.; Montoya, D. I.; Roybal, P. L.; Schrage, D. L.; Tajima, T.

    2001-01-01

    The current design of the Accelerator Driven Test Facility (ADTF) accelerator contains multiple-β superconducting resonant cavities. Spoke-type resonators (β = 0.175 and β = 0.34) are proposed for the low energy linac immediately following the radio frequency quadrupole. A continuous wave power requirement of 8.5 - 211.8 kW at 350 MHz has been established for the input couplers of these spoke cavities. The coupler design approach was to have a single input coupler design for beam currents of 13.3 mA and 100 mA and both cavity β values. The baseline design consists of a half-height WR2300 waveguide section merged with a shorted coaxial conductor. At the transition is a 4.8-mm thick cylindrical ceramic window creating the air/vacuum barrier. The coax is 103-mm inner diameter, 75 Ohm. The coax extends from the short through the waveguide and terminates with an antenna tip in the sidewall of the cavity. A full diameter pumping port is located in the quarter-wave stub to facilitate good vacuum. The coaxial geometry chosen was based on multipacting and thermal design considerations. The coupling coefficient is adjusted by statically adjusting the outer conductor length. The RF-physics, thermal, vacuum, and structural design considerations will be discussed in this paper, in addition to future room temperature testing plans.

  5. Efficient Coupler for a Bessel Beam Dispersive Element

    NASA Technical Reports Server (NTRS)

    Savchenkov, Anatoliy; Iltchenko, Vladimir; Matsko, Andrey; Le, Thanh; Yu, Nan; Maleki, Lute

    2008-01-01

    A document discusses achieving efficient optical coupling to high-orbital-momentum modes by slightly bending the tapered dispersive element. This small shape distortion is not enough to scramble the modes, but it allows the use of regular free-beam prism coupling, fiber coupling, or planar fiber on-chip coupling with, ultimately, 100 percent efficiency. The Bessel-beam waveguide is bent near the contact with the coupler, or a curved coupler is used. In this case, every Bessel-beam mode can be successfully coupled to a collimated Gaussian beam. Recently developed Bessel-beam waveguides allow long optical delay and very high dispersion. Delay values may vary from nanoseconds to microseconds, and dispersion promises to be at 100 s/nm. The optical setup consisted of a red laser, an anamorphic prism pair, two prism couplers, and a bent, single-mode fiber attached to prisms. The coupling rate increased substantially and corresponded to the value determined by the anamorphic prism pair.

  6. Capacitive Fundamental Power Coupler and Pickup for the 56 MHz SRF Cavity

    SciTech Connect

    Choi, E.; Hahn, H.

    2008-07-01

    The beam excited 56 MHz SRF cavity will have a power coupler for a fast frequency tuner. The calculation shows the coupling of the power coupler, β_opt, is around 50. Size and location of the power coupler are determined by measurements. Measurements are in good agreement with the simulation results. The axial location of the power coupler for the Nb cavity is limited by corrugations made on the cavity outer conductor for the purpose of removing any multipacting. The preferred axial location is 14.5 cm away from the cavity gap start, where a slow tuner plate will be. MWS simulations are done to determine the length of the power coupler inner conductor and pickup probe for the Nb cavity at the fixed axial location. Size and location of both the fundamental power coupler and the pickup probe can be decided from the simulation results.

  7. Inexpensive 3dB coupler for POF communication by injection-molding production

    NASA Astrophysics Data System (ADS)

    Haupt, M.; Fischer, U. H. P.

    2011-01-01

    POFs (polymer optical fibers) gradually replace traditional communication media such as copper and glass within short distance communication systems. Primarily, this is due to their cost-effectiveness and easy handling. POFs are used in various fields of optical communication, e.g. the automotive sector or in-house communication. So far, however, only a few key components for a POF communication network are available. Even basic components, such as splices and couplers, are fabricated manually. Therefore, these circumstances result in high costs and fluctuations in components' performance. Available couplers have high insertion losses due to their manufacturing method. This can only be compensated by higher power budgets. In order to produce couplers with higher performances new fabrication methods are indispensable. A cheap and effective way to produce couplers for POF communication systems is injection molding. The paper gives an overview of couplers available on market, compares their performances, and shows a way to produce couplers by means of injection molding.

  8. Parallel computers

    SciTech Connect

    Treveaven, P.

    1989-01-01

    This book presents an introduction to object-oriented, functional, and logic parallel computing on which the fifth generation of computer systems will be based. Coverage includes concepts for parallel computing languages, a parallel object-oriented system (DOOM) and its language (POOL), an object-oriented multilevel VLSI simulator using POOL, and implementation of lazy functional languages on parallel architectures.

  9. Design and fabrication of multimode interference couplers based on digital micro-mirror system

    NASA Astrophysics Data System (ADS)

    Wu, Sumei; He, Xingdao; Shen, Chenbo

    2008-03-01

    Multimode interference (MMI) couplers, based on the self-imaging effect (SIE), are widely used in integrated optics. Given the importance of MMI devices, in this paper we present a novel method to design and fabricate MMI couplers. A maskless lithography technique for making MMI couplers, based on a smart digital micro-mirror device (DMD) system, is proposed. A 1×4 MMI device is designed as an example, which shows the present method is efficient and cost-effective.

  10. Challenges in building high performance geoscientific spatial data infrastructures

    NASA Astrophysics Data System (ADS)

    Dubros, Fabrice; Tellez-Arenas, Agnes; Boulahya, Faiza; Quique, Robin; Le Cozanne, Goneri; Aochi, Hideo

    2016-04-01

    One of the main challenges in Geosciences is to deal with both the huge amounts of data available nowadays and the increasing need for fast and accurate analysis. On one hand, computer aided decision support systems remain a major tool for quick assessment of natural hazards and disasters. High performance computing lies at the heart of such systems by providing the required processing capabilities for large three-dimensional time-dependent datasets. On the other hand, information from Earth observation systems at different scales is routinely collected to improve the reliability of numerical models. Therefore, various efforts have been devoted to design scalable architectures dedicated to the management of these data sets (Copernicus, EarthCube, EPOS). Indeed, standard data architectures suffer from a lack of control over data movement. This situation prevents the efficient exploitation of parallel computing architectures as the cost for data movement has become dominant. In this work, we introduce a scalable architecture that relies on high performance components. We discuss several issues such as three-dimensional data management, complex scientific workflows and the integration of high performance computing infrastructures. We illustrate the use of such architectures, mainly using off-the-shelf components, in the framework of both coastal flooding assessments and earthquake early warning systems.

  11. A compilation system that integrates high performance Fortran and Fortran M

    SciTech Connect

    Foster, I.; Xu, Ming; Avalani, B.; Choudhary, A.

    1994-06-01

    Task parallelism and data parallelism are often seen as mutually exclusive approaches to parallel programming. Yet there are important classes of application, for example in multidisciplinary simulation and command and control, that would benefit from an integration of the two approaches. In this paper, we describe a programming system that we are developing to explore this sort of integration. This system builds on previous work on task-parallel and data-parallel Fortran compilers to provide an environment in which the task-parallel language Fortran M can be used to coordinate data-parallel High Performance Fortran tasks. We use an image-processing problem to illustrate the issues that arise when building an integrated compilation system of this sort.

  12. Dinosaurs can fly -- High performance refining

    SciTech Connect

    Treat, J.E.

    1995-09-01

    High performance refining requires that one develop a winning strategy based on a clear understanding of one's position in one's company's value chain; one's competitive position in the products markets one serves; and the most likely drivers and direction of future market forces. The author discussed all three points, then described measuring performance of the company. To become a true high performance refiner often involves redesigning the organization as well as the business processes. The author discusses such redesigning. The paper summarizes ten rules to follow to achieve high performance: listen to the market; optimize; organize around asset or area teams; trust the operators; stay flexible; source strategically; all maintenance is not equal; energy is not free; build project discipline; and measure and reward performance. The paper then discusses the constraints to the implementation of change.

  13. Strategy Guideline. Partnering for High Performance Homes

    SciTech Connect

    Prahl, Duncan

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated in the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  14. System analysis of high performance MHD systems

    SciTech Connect

    Chang, S.L.; Berry, G.F.; Hu, N.

    1988-01-01

    This paper presents the results of an investigation into the upper ranges of performance that an MHD power plant using advanced technology assumptions might achieve, and a parametric study of the key variables affecting this high performance. To simulate a high performance MHD power plant and conduct a parametric study, the Systems Analysis Language Translator (SALT) code developed at Argonne National Laboratory was used. The parametric study results indicate that the overall efficiency of an MHD power plant can be further increased subject to the improvement of key variables such as the MHD generator inverter efficiency, channel electrical loading factor, magnetic field strength, preheated air temperature, and combustor heat loss. In an optimization calculation, the simulated high performance MHD power plant using advanced technology assumptions can attain an ultra-high overall efficiency, exceeding 62%. 12 refs., 5 figs., 4 tabs.

  15. Opportunities and challenges of high-performance computing in chemistry

    SciTech Connect

    Guest, M.F.; Kendall, R.A.; Nichols, J.A.

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical 'grand challenge' problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  16. High Performance, Three-Dimensional Bilateral Filtering

    SciTech Connect

    Bethel, E. Wes

    2008-06-05

    Image smoothing is a fundamental operation in computer vision and image processing. This work has two main thrusts: (1) implementation of a bilateral filter suitable for use in smoothing, or denoising, 3D volumetric data; (2) implementation of the 3D bilateral filter in three different parallelization models, along with parallel performance studies on two modern HPC architectures. Our bilateral filter formulation is based upon the work of Tomasi [11], but extended to 3D for use on volumetric data. Our three parallel implementations use POSIX threads, the Message Passing Interface (MPI), and Unified Parallel C (UPC), a Partitioned Global Address Space (PGAS) language. Our parallel performance studies, which were conducted on a Cray XT4 supercomputer and a quad-socket, quad-core Opteron workstation, show our algorithm to have near-perfect scalability up to 120 processors. Parallel algorithms, such as the one we present here, will have an increasingly important role for use in production visual analysis systems as the underlying computational platforms transition from single- to multi-core architectures in the future.
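    A serial reference sketch of the filter itself (NumPy, brute-force neighborhood loop; our illustration, not the paper's threaded/MPI/UPC code) shows the spatial-times-range weighting that the parallel implementations distribute across processors:

    ```python
    # Serial 3D bilateral filter: each voxel is a normalized weighted average of
    # its neighborhood, weighted by a spatial Gaussian times a range Gaussian.
    import numpy as np

    def bilateral3d(vol, sigma_s, sigma_r, radius):
        out = np.zeros(vol.shape, dtype=np.float64)
        weights_acc = np.zeros_like(out)
        pad = np.pad(vol, radius, mode="edge")
        offsets = range(-radius, radius + 1)
        for dz in offsets:
            for dy in offsets:
                for dx in offsets:
                    spatial = np.exp(-(dz*dz + dy*dy + dx*dx) / (2 * sigma_s**2))
                    shifted = pad[radius+dz : radius+dz+vol.shape[0],
                                  radius+dy : radius+dy+vol.shape[1],
                                  radius+dx : radius+dx+vol.shape[2]]
                    # Range weight: penalize neighbors with very different intensity.
                    w = spatial * np.exp(-(shifted - vol)**2 / (2 * sigma_r**2))
                    out += w * shifted
                    weights_acc += w
        return out / weights_acc

    vol = np.random.rand(32, 32, 32)
    smoothed = bilateral3d(vol, sigma_s=2.0, sigma_r=0.1, radius=2)
    ```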

  17. Using LEADS to shift to high performance.

    PubMed

    Fenwick, Shauna; Hagge, Erna

    2016-03-01

    Health systems across Canada are tasked to measure results of all their strategic initiatives. Included in most strategic plans is leadership development. How to measure leadership effectiveness in relation to organizational objectives is key in determining organizational effectiveness. The following findings offer considerations for a 21st-century approach to shifting to high-performance systems.

  18. Performance, Performance System, and High Performance System

    ERIC Educational Resources Information Center

    Jang, Hwan Young

    2009-01-01

    This article proposes needed transitions in the field of human performance technology. The following three transitions are discussed: transitioning from training to performance, transitioning from performance to performance system, and transitioning from learning organization to high performance system. A proposed framework that comprises…

  19. Team Development for High Performance Management.

    ERIC Educational Resources Information Center

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  20. Overview of high performance aircraft propulsion research

    NASA Technical Reports Server (NTRS)

    Biesiadny, Thomas J.

    1992-01-01

    The overall scope of the NASA Lewis High Performance Aircraft Propulsion Research Program is presented. High performance fighter aircraft of interest include supersonic flights with such capabilities as short take off and vertical landing (STOVL) and/or high maneuverability. The NASA Lewis effort involving STOVL propulsion systems is focused primarily on component-level experimental and analytical research. The high-maneuverability portion of this effort, called the High Alpha Technology Program (HATP), is part of a cooperative program among NASA's Lewis, Langley, Ames, and Dryden facilities. The overall objective of the NASA Inlet Experiments portion of the HATP, which NASA Lewis leads, is to develop and enhance inlet technology that will ensure high performance and stability of the propulsion system during aircraft maneuvers at high angles of attack. To accomplish this objective, both wind-tunnel and flight experiments are used to obtain steady-state and dynamic data, and computational fluid dynamics (CFD) codes are used for analyses. This overview of the High Performance Aircraft Propulsion Research Program includes a sampling of the results obtained thus far and plans for the future.

  1. Project materials [Commercial High Performance Buildings Project

    SciTech Connect

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefit of buildings that are designed, built and operated to be energy efficient, environmentally sustainable, superior quality, and cost effective.

  2. High Performance Builder Spotlight: Imagine Homes

    SciTech Connect

    2011-01-01

    Imagine Homes, working with the DOE's Building America research team member IBACOS, has developed a system that can be replicated by other contractors to build affordable, high-performance homes. Imagine Homes has used the system to produce more than 70 Builders Challenge-certified homes per year in San Antonio over the past five years.

  3. Commercial Buildings High Performance Rooftop Unit Challenge

    SciTech Connect

    2011-12-16

    The U.S. Department of Energy (DOE) and the Commercial Building Energy Alliances (CBEAs) are releasing a new design specification for high performance rooftop air conditioning units (RTUs). Manufacturers who develop RTUs based on this new specification will find strong interest from the commercial sector due to the energy and financial savings.

  4. High Performance Networks for High Impact Science

    SciTech Connect

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  5. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
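    A toy sketch of the grouping idea (illustrative only, not the patented tooling; the addresses and thread sample are made up): threads blocked at the same calling address fall into one group, and small outlier groups are the natural place to look first for a defective thread.

    ```python
    # Group threads by the address of the call they are currently blocked in,
    # then flag small outlier groups as candidates for closer inspection.
    from collections import defaultdict

    # Hypothetical sample: thread id -> calling-instruction address.
    blocked_at = {0: 0x4008f0, 1: 0x4008f0, 2: 0x4008f0, 3: 0x4011a2,
                  4: 0x4008f0, 5: 0x4008f0, 6: 0x4008f0, 7: 0x4008f0}

    groups = defaultdict(list)
    for tid, addr in blocked_at.items():
        groups[addr].append(tid)

    for addr, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
        tag = "  <-- outlier, inspect first" if len(tids) < len(blocked_at) // 4 else ""
        print(f"0x{addr:x}: threads {tids}{tag}")
    ```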

  6. Debugging a high performance computing program

    DOEpatents

    Gooding, Thomas M.

    2014-08-19

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  7. Co-design for High Performance Computing

    NASA Astrophysics Data System (ADS)

    Rodrigues, Arun; Dosanjh, Sudip; Hemmert, Scott

    2010-09-01

    Co-design has been identified as a key strategy for achieving Exascale computing in this decade. This paper describes the need for co-design in High Performance Computing, related research in embedded computing, and the development of hardware/software co-simulation methods.

  8. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United States.…

  9. High Performance Work Systems for Online Education

    ERIC Educational Resources Information Center

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  10. Scalable parallel solution coupling for multi-physics reactor simulation.

    SciTech Connect

    Tautges, T. J.; Caceres, A.; Mathematics and Computer Science

    2009-01-01

    Reactor simulation depends on the coupled solution of various physics types, including neutronics, thermal/hydraulics, and structural mechanics. This paper describes the formulation and implementation of a parallel solution coupling capability being developed for reactor simulation. The coupling process consists of mesh and coupler initialization, point location, field interpolation, and field normalization. We report here our test of this capability on an example problem, namely, a reflector assembly from an advanced burner test reactor. Performance of this coupler in parallel is reasonable for the chosen problem size and range of processor counts. The runtime is dominated by startup costs, which amortize over the entire coupled simulation. Future efforts will include adding more sophisticated interpolation and normalization methods, to accommodate different numerical solvers used in various physics modules and to obtain better conservation properties for certain field types.
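    A hedged sketch of the field-transfer steps named above (interpolation followed by normalization), using SciPy on synthetic point clouds rather than the actual mesh-based coupler:

    ```python
    # Interpolate a source field onto target points, then rescale so an
    # integrated quantity is conserved ("normalization"). Synthetic data only.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    src_pts = rng.random((500, 3))              # source mesh vertex coordinates
    src_field = np.sin(src_pts[:, 0] * np.pi)   # e.g. a power density field
    tgt_pts = rng.random((200, 3))              # target points (point location done)

    tgt_field = griddata(src_pts, src_field, tgt_pts, method="linear")
    tgt_field = np.where(np.isnan(tgt_field),   # nearest-neighbor fallback outside hull
                         griddata(src_pts, src_field, tgt_pts, method="nearest"),
                         tgt_field)

    # Normalize so the mean transferred value matches the source mean.
    tgt_field *= src_field.mean() / tgt_field.mean()
    ```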

  11. Idle waves in high-performance computing.

    PubMed

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distribute computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications that rely on local information exchange between neighboring processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
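    A toy model of the effect (assumptions ours, not the paper's instrumentation): ranks in a 1D chain do a fixed amount of work per step and then wait for both neighbors, so a one-off delay injected on one rank propagates outward one rank per step, i.e. at a speed set by the busy time.

    ```python
    # 1D chain of ranks with nearest-neighbor dependencies: a single delay on
    # rank 8 spreads outward as an "idle wave", one rank per step.
    N, steps, busy = 16, 10, 1.0
    finish = [0.0] * N                      # time each rank finished the previous step
    extra = {8: 5.0}                        # rank 8 is delayed once, at step 0

    for step in range(steps):
        # A rank cannot start until its own and both neighbors' data are ready.
        start = [max(finish[max(i - 1, 0)], finish[i], finish[min(i + 1, N - 1)])
                 for i in range(N)]
        finish = [start[i] + busy + (extra.get(i, 0.0) if step == 0 else 0.0)
                  for i in range(N)]
        lag = " ".join(f"{start[i] - busy * step:4.1f}" for i in range(N))
        print(f"step {step:2d} lag: {lag}")
    ```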

  12. RISC Processors and High Performance Computing

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  13. Monitoring SLAC High Performance UNIX Computing Systems

    SciTech Connect

    Lettsome, Annette K. (Bethune-Cookman Coll.; SLAC)

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process of creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
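    A hedged sketch of the script-driven approach (table layout, credentials, and the sample metric are hypothetical; mysql-connector-python assumed installed): metrics collected from the Ganglia feed are inserted into a MySQL table instead of the round-robin database.

    ```python
    # Periodically insert host metrics into MySQL; schema and credentials are
    # placeholders, and the sample value stands in for the Ganglia feed.
    import time
    import mysql.connector  # assumed installed

    conn = mysql.connector.connect(host="localhost", user="ganglia",
                                   password="secret", database="metrics")
    cur = conn.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS samples (
                       host VARCHAR(64), metric VARCHAR(64),
                       value DOUBLE, ts DOUBLE)""")

    def record(host: str, metric: str, value: float) -> None:
        cur.execute("INSERT INTO samples VALUES (%s, %s, %s, %s)",
                    (host, metric, value, time.time()))
        conn.commit()

    record("node01", "load_one", 0.42)   # value would come from the Ganglia feed
    ```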

  14. High performance microsystem packaging: A perspective

    SciTech Connect

    Romig, A.D. Jr.; Dressendorfer, P.V.; Palmer, D.W.

    1997-10-01

    The second silicon revolution will be based on intelligent, integrated microsystems where multiple technologies (such as analog, digital, memory, sensor, micro-electro-mechanical, and communication devices) are integrated onto a single chip or within a multichip module. A necessary element for such systems is cost-effective, high-performance packaging. This paper examines many of the issues associated with the packaging of integrated microsystems, with an emphasis on the areas of packaging design, manufacturability, and reliability.

  15. The phase of the coupling effect on entanglement decay in the nonlinear coupler system

    NASA Astrophysics Data System (ADS)

    Kowalewska-Kudłaszyk, A.; Leoński, W.

    2010-09-01

    We consider the influence of the phase of internal coupling within a nonlinear Kerr-like coupler on entanglement decay. Assuming that the coupler interacts with an external reservoir, we show that by changing this phase we can obtain qualitatively various types of entanglement decay. In consequence, we are able to control the occurrence of sudden death and sudden birth phenomena.

  16. Effects of beam focusing on the efficiency of planar waveguide grating couplers

    NASA Technical Reports Server (NTRS)

    Li, Lifeng; Gupta, Mool C.

    1991-01-01

    Results of a theoretical and experimental study into the variation of coupling efficiency with a grating angle are presented for various beam focusing conditions for an integrated optical grating coupler. The study shows that the acceptance angle of the grating coupler can be broadened within a relatively large range and with a relatively small loss of coupling efficiency, by focusing the incident laser beam.

  17. Achieving High Performance Perovskite Solar Cells

    NASA Astrophysics Data System (ADS)

    Yang, Yang

    2015-03-01

    Recently, metal halide perovskite based solar cells, with the characteristics of rather low raw materials cost, great potential for simple processing and scalable production, and extremely high power conversion efficiency (PCE), have been highlighted as one of the most competitive technologies for next generation thin film photovoltaics (PV). At UCLA, we have realized an efficient pathway to achieve high performance perovskite solar cells, where the findings are beneficial to this unique materials/devices system. Our recent progress lies in perovskite film formation, defect passivation, transport materials design, and interface engineering with respect to high performance solar cells, as well as the exploration of applications beyond photovoltaics. These achievements include: 1) development of a vapor assisted solution process (VASP) and a moisture assisted solution process, which produce perovskite films with improved conformity, high crystallinity, reduced recombination rate, and the resulting high performance; 2) examination of the defect properties of perovskite materials, and demonstration of a self-induced passivation approach to reduce carrier recombination; 3) interface engineering based on design of the carrier transport materials and the electrodes, in combination with high quality perovskite films, which delivers 15 ~ 20% PCEs; 4) a novel integration of a bulk heterojunction into the perovskite solar cell to achieve better light harvesting; 5) fabrication of inverted solar cell devices with high efficiency and flexibility; and 6) exploration of the application of perovskite materials to photodetectors. Further development in films, device architecture, and interfaces will lead to continuously improved perovskite solar cells and other organic-inorganic hybrid optoelectronics.

  18. ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS

    SciTech Connect

    WONG, CPC; MALANG, S; NISHIO, S; RAFFRAY, R; SAGARA, S

    2002-04-01

    OAK A271 ADVANCED HIGH PERFORMANCE SOLID WALL BLANKET CONCEPTS. First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket components lifetime and availability.

  19. Programming high-performance reconfigurable computers

    NASA Astrophysics Data System (ADS)

    Smith, Melissa C.; Peterson, Gregory D.

    2001-07-01

    High Performance Computers (HPC) provide dramatically improved capabilities for a number of defense and commercial applications, but often are too expensive to acquire and to program. The smaller market and customized nature of HPC architectures combine to increase the cost of most such platforms. To address high hardware costs, one may build less expensive Beowulf clusters of dedicated commodity processors. Despite the benefit of reduced hardware costs, programming HPC platforms to achieve high performance often proves extremely time-consuming and expensive in practice. In recent years, programming productivity gains have come from the development of common APIs and libraries of functions to support distributed applications. Examples include PVM, MPI, BLAS, and VSIPL. The implementation of each API or library is optimized for a given platform, but application developers can write code that is portable across specific HPC architectures. The application of reconfigurable computing (RC) to HPC platforms promises significantly enhanced performance and flexibility at a modest cost. Unfortunately, configuring (programming) the reconfigurable computing nodes remains a challenging task, and relatively little work to date has focused on potential high performance reconfigurable computing (HPRC) platforms consisting of reconfigurable nodes paired with processing nodes. This paper addresses the challenge of effectively exploiting HPRC resources by first considering the performance evaluation and optimization problem before turning to improving the programming infrastructure used for porting applications to HPRC platforms.

  20. Computational Biology and High Performance Computing 2000

    SciTech Connect

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational, and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies that will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  1. Green Schools as High Performance Learning Facilities

    ERIC Educational Resources Information Center

    Gordon, Douglas E.

    2010-01-01

    In practice, a green school is the physical result of a consensus process of planning, design, and construction that takes into account a building's performance over its entire 50- to 60-year life cycle. The main focus of the process is to reinforce optimal learning, a goal very much in keeping with the parallel goals of resource efficiency and…

  2. Precision measurement system and analysis of low core signal loss in DCF couplers

    NASA Astrophysics Data System (ADS)

    Yan, P.; Wang, X. J.; Fu, Ch; Li, D.; Sun, J. Y.; Gong, M. L.; Xiao, Q. R.

    2016-07-01

    In order to achieve higher output power from double-clad fiber lasers, low signal loss has become a focus of research in optical technology, especially for double-clad fiber (DCF) couplers. Based on this analysis, DCF couplers with low core signal loss (less than 1%) are produced. To obtain higher precision, we use a newly proposed method for measuring core signal transfer efficiency based on processing images of the fiber propagation field. To the best of our knowledge, we report for the first time core signal loss of less than 1% in a DCF coupler, measured by our method with high stability and relative precision. The measured values can be used to assess the quality of DCF couplers and to guide improvements to the processing technology of our self-made DCF couplers.
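    The core-signal transfer efficiency measurement described above is image-based. As a purely illustrative sketch (not the authors' actual processing pipeline), the core/cladding power split could be estimated from a near-field intensity image as below; the image array, core-center coordinates, and core radius are hypothetical inputs.

    ```python
    import numpy as np

    def core_power_fraction(intensity, center, core_radius_px):
        """Estimate the fraction of total power guided in the core from a
        near-field intensity image (2-D array of non-negative values)."""
        rows, cols = np.indices(intensity.shape)
        r = np.hypot(rows - center[0], cols - center[1])
        core_mask = r <= core_radius_px
        total = intensity.sum()
        return intensity[core_mask].sum() / total if total > 0 else 0.0

    # Hypothetical usage: compare images taken before and after the coupler.
    # transfer_efficiency = core_power_fraction(img_out, c_out, r_px) / \
    #                       core_power_fraction(img_in, c_in, r_px)
    ```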

  3. Measurements of ICRF (ion cyclotron range of frequencies) loading with a ridged waveguide coupler on PLT

    SciTech Connect

    Greene, G.J.; Wilson, J.R.; Colestock, P.L.; Fortgang, C.M.; Hosea, J.C.; Hwang, D.Q.; Nagy, A.

    1987-11-01

    An ICRF ridged waveguide coupler has been installed on PLT for measurements of plasma loading. The coupler was partially filled with TiO2 dielectric in order to sufficiently lower the cutoff frequency and utilized a tapered ridge for improved matching. Vacuum field measurements indicated a single propagating mode in the coupler and emphasized the importance of considering the fringing fields at the mouth of the waveguide. Low power experiments were carried out at 72.6 and 95.0 MHz without any external impedance matching network. Plasma loading increased rapidly as the face of the coupler approached the plasma, and, at fixed position, increased with line-averaged plasma density. At the lower frequency, the reflection coefficient exhibited a minimum (<8%) at a particular coupler position. At both frequencies, measurements indicated efficient power coupling to the plasma. Magnetic probe signals showed evidence of dense eigenmodes suggesting excitation of the fast wave. 24 refs., 13 figs.

  4. Broadly tunable multiwavelength Brillouin-erbium fiber laser using a twin-core fiber coupler

    NASA Astrophysics Data System (ADS)

    Peng, Wanjing; Yan, Fengping; Li, Qi; Liu, Shuo; Tan, Siyu; Feng, Suchun; Feng, Ting

    2014-07-01

    A tunable multiwavelength Brillouin-erbium fiber laser (MW-BEFL) using a twin-core fiber (TCF) coupler is proposed and demonstrated. The TCF coupler is formed by splicing a section of TCF between two single-mode fibers. By simply applying a bending curvature to the TCF coupler, the peak net gain is shifted close to the Brillouin pump (BP), which is advantageous for suppressing self-lasing cavity modes with low-BP-power injection. In this work, the dependence of the Stokes signal tuning range on the free spectral range (FSR) of the TCF coupler is studied. It is also found that the tuning range of the MW-BEFL can exceed the FSR of the TCF coupler by adopting proper BP power and 980-nm pump power. A tuning range of up to 40 nm, free of self-lasing cavity modes, is achieved.

  5. Integrated in-fiber coupler for microsphere whispering-gallery modes resonator excitation.

    PubMed

    Wang, Ruohui; Fraser, Michael; Li, Jiacheng; Qiao, Xueguang; Wang, Anbo

    2015-02-01

    We present an integrated in-fiber coupler for excitation of whispering-gallery modes of a microsphere resonator. The coupler is fabricated simply by chemically etching away the holey area of a photonic crystal fiber, leaving a freestanding solid core enclosed in a silica housing. Light is coupled into a microsphere through the suspended core, which has a diameter of 2.1 μm. Since the coupler itself acts as a Fabry-Perot interferometer, asymmetric Fano resonances can be observed in the mixed reflection spectrum. The silica housing of the coupler provides robust mechanical support to the microsphere resonator. The new Fano resonance coupler shows great potential in biochemical sensing and optical switching applications. PMID:25680034

  6. A high performance scientific cloud computing environment for materials simulations

    NASA Astrophysics Data System (ADS)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  7. High performance capacitors using nano-structure multilayer materials fabrication

    DOEpatents

    Barbee, T.W. Jr.; Johnson, G.W.; O'Brien, D.W.

    1995-05-09

    A high performance capacitor is fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The notepad capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density. 5 figs.

  8. High performance capacitors using nano-structure multilayer materials fabrication

    DOEpatents

    Barbee, Jr., Troy W.; Johnson, Gary W.; O'Brien, Dennis W.

    1996-01-01

    A high performance capacitor fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density.

  9. High performance capacitors using nano-structure multilayer materials fabrication

    DOEpatents

    Barbee, Jr., Troy W.; Johnson, Gary W.; O'Brien, Dennis W.

    1995-01-01

    A high performance capacitor fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density.

  10. High performance capacitors using nano-structure multilayer materials fabrication

    DOEpatents

    Barbee, T.W. Jr.; Johnson, G.W.; O'Brien, D.W.

    1996-01-23

    A high performance capacitor is described which is fabricated from nano-structure multilayer materials, such as by controlled, reactive sputtering, and having very high energy-density, high specific energy and high voltage breakdown. The multilayer capacitors, for example, may be fabricated in a "notepad" configuration composed of 200-300 alternating layers of conductive and dielectric materials so as to have a thickness of 1 mm, width of 200 mm, and length of 300 mm, with terminals at each end of the layers suitable for brazing, thereby guaranteeing low contact resistance and high durability. The "notepad" capacitors may be stacked in single or multiple rows (series-parallel banks) to increase the voltage and energy density. 5 figs.

  11. High Performance Multivariate Visual Data Exploration for Extremely Large Data

    SciTech Connect

    Rubel, Oliver; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes; Prabhat,

    2008-08-22

    One of the central challenges in modern science is the need to quickly derive knowledge and understanding from large, complex collections of data. We present a new approach that deals with this challenge by combining and extending techniques from high performance visual data analysis and scientific data management. This approach is demonstrated within the context of gaining insight from complex, time-varying datasets produced by a laser wakefield accelerator simulation. Our approach leverages histogram-based parallel coordinates for both visual information display as well as a vehicle for guiding a data mining operation. Data extraction and subsetting are implemented with state-of-the-art index/query technology. This approach, while applied here to accelerator science, is generally applicable to a broad set of science applications, and is implemented in a production-quality visual data analysis infrastructure. We conduct a detailed performance analysis and demonstrate good scalability on a distributed memory Cray XT4 system.

  12. Stabilizing mechanism and running behavior of couplers on heavy haul trains

    NASA Astrophysics Data System (ADS)

    Xu, Ziqiang; Wu, Qing; Luo, Shihui; Ma, Weihua; Dong, Xiaoqing

    2014-11-01

    Published studies of coupler systems have mainly focused on manufacturing processes or coupler strength issues. With the ever-increasing tonnage and length of heavy haul trains, lateral in-train forces generated by longitudinal in-train forces and coupler rotations have become an increasingly significant safety issue for heavy haul train operations. Derailments caused by excessive lateral in-train forces are frequently reported. This article studies two typical coupler systems used on heavy haul locomotives. Their structures and stabilizing mechanisms are analyzed before the corresponding models are developed. The coupler system models feature two distinct stabilizing-mechanism models and draft gear models that account for hysteresis. A model set consisting of four locomotives and three coupler systems is developed to study the rotational behavior of different coupler systems and their implications for locomotive dynamics. Simulation results indicate that locomotives equipped with the type B coupler system can meet the dynamics standard on tangent track, while their dynamics performance on curved track is very poor. The maximum longitudinal in-train force for locomotives equipped with the type B coupler system is 2000 kN. Simulations revealed a different trend for the type A coupler system: locomotive dynamics are poorer on tangent track, while they are better on curved track. Theoretical studies and simulations carried out in this article suggest that a combination of the two types of stabilizing mechanism can result in a design that significantly decreases the relevant derailments.

  13. The impacts of ageing effects due to radiation burden on optical fiber couplers

    NASA Astrophysics Data System (ADS)

    Perecar, F.; Marcinka, O.; Bednarek, L.; Lucki, M.; Liner, A.; Hajek, L.; Papes, M.; Jaros, J.; Vasinek, V.

    2015-08-01

    This paper discusses the accelerated ageing of optical fiber elements under a gamma radiation burden. In addition to destroying coating materials, gamma radiation also affects the internal structure of the optical fiber. It is necessary to specify the changes that occur in optical couplers and to find out why they occur. The article reports experimental measurements of the impact of Cobalt-60 gamma radiation on optical couplers with various splitting ratios. The passive optical couplers were exposed to gradually increasing doses of 60Co. Measurements focus on the overall distribution of the energy of the LP01 mode in the core and cladding of the various branches of single-mode (SM) optical fiber couplers. Graphical and mathematical detection of the changes in the energy distribution of the coupler after individual doses of gamma radiation helps in understanding the accelerated ageing of optical network elements. This work addresses applied research and experimental development of resources for the safe operation of optical networks, since monitoring of ageing contributes substantially to their security; how the energy of gamma radiation influences optical network elements has so far been explored very little and remains an open question.

  14. Sound pressure in insert earphone couplers and real ears.

    PubMed

    Burkhard, M D; Sachs, R M

    1977-12-01

    It is known that sound pressure, measured in couplers via a probe-tube microphone, often shows a pressure vs frequency response that drops sharply at a single frequency. In this study sound pressure was theoretically determined at various locations within a hard-walled cylindrical cavity, driven by a constant-volume velocity source with circular symmetry. At each location in the volume, a transfer impedance was defined as the ratio of pressure to inlet-volume velocity. In the region around the inlet, the transfer impedance passes through zero as it changes from negative to positive reactance with increasing frequency. Two hard-walled cavity examples were examined in detail: (1) the main cavity of a 2-cm3 HA-2 coupler, and (2) a cavity having dimensions approximately equal to the occluded ear canal between an ear-mold tip and the eardrum. Contours of constant minimum sound pressure vs frequency are given for these two cylindrical volumes with experimental verification. Implications for probe microphone calibration and measurement of sound pressure in ears are discussed. PMID:604691
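    For orientation, the transfer impedance concept can be illustrated with the textbook lumped-element limit, in which a small hard-walled cavity well below its first resonance behaves as a pure acoustic compliance, Z(ω) = ρ₀c²/(jωV). The sketch below uses the 2-cm³ volume mentioned above purely as an example; this simple model cannot reproduce the zero-crossing reactance behavior, which requires the full modal solution reported in the paper.

    ```python
    import numpy as np

    RHO0 = 1.21   # air density, kg/m^3 (approximate)
    C0 = 343.0    # speed of sound in air, m/s (approximate)

    def cavity_transfer_impedance(freq_hz, volume_m3):
        """Lumped-element (compliance-only) acoustic impedance of a small
        hard-walled cavity, valid well below its first resonance."""
        omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
        return RHO0 * C0**2 / (1j * omega * volume_m3)

    # Example: 2 cm^3 coupler volume, 100 Hz to 10 kHz
    freqs = np.logspace(2, 4, 5)
    z = cavity_transfer_impedance(freqs, 2e-6)
    print(np.abs(z))  # impedance magnitude falls as 1/frequency
    ```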

  15. Supersymmetry-Inspired Non-Hermitian Optical Couplers

    NASA Astrophysics Data System (ADS)

    Principe, Maria; Castaldi, Giuseppe; Consales, Marco; Cusano, Andrea; Galdi, Vincenzo

    2015-02-01

    Supersymmetry has been shown to provide a systematic and effective framework for generating classes of isospectral optical structures featuring perfectly-phase-matched modes, with the exception of one (fundamental) mode which can be removed. More recently, this approach has been extended to non-Hermitian scenarios characterized by spatially-modulated distributions of optical loss and gain, in order to allow the removal of higher-order modes as well. In this paper, we apply this approach to the design of non-Hermitian optical couplers with higher-order mode-selection functionalities, with potential applications to mode-division multiplexing in optical links. In particular, we highlight the critical role of the coupling between non-Hermitian optical waveguides, which generally induces a phase transition to a complex eigenspectrum, thereby hindering the targeted mode-selection functionality. With the specific example of an optical coupler that selects the second-order mode of a given waveguide, we illustrate the aforementioned limitations and propose possible strategies to overcome them, bearing in mind the practical feasibility of the gain levels required.

  16. Coupler-free transition from light to surface plasmon polariton

    NASA Astrophysics Data System (ADS)

    Du, Chunguang; Jing, Qingli; Hu, Zhengfeng

    2015-01-01

    It is widely believed that surface plasmon polaritons (SPPs) at flat metal surfaces cannot be directly excited by free-space light because of the momentum mismatch. Here we propose a way to resonantly excite SPPs with light without using any coupler, and we propose a kind of surface-plasmon-resonance (SPR) system that can be composed simply of a metal film and a bottom medium whose real part of permittivity is less than unity. Light incident from vacuum or air onto the upper surface of the metal film can directly excite an SPP at the lower surface of the metal film. Some special media can be selected as the bottom medium, including an atomic vapor under electromagnetically induced transparency (EIT) and some special solid media. In the atomic EIT case, the steep dispersion at the transparency window can lead to interesting phenomena; e.g., very slow directional motion of the EIT atoms can produce unidirectional SPPs (reverse-propagating SPP modes do not exist) and therefore a highly asymmetric SPR angle spectrum that is very sensitive to the velocity of the atoms. We also present feasible and simple designs of all-solid-state coupler-free SPR systems, composed simply of a titanium film and a substrate of amorphous aluminum oxide or silicon oxide.
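    The momentum mismatch invoked above follows from the textbook SPP dispersion relation for a flat metal/dielectric interface, k_spp = (ω/c)·sqrt(ε_m·ε_d/(ε_m + ε_d)). The short sketch below only illustrates that Re(k_spp) exceeds the free-space wavenumber; the permittivity value is an illustrative estimate and is not taken from the paper.

    ```python
    import numpy as np

    def k_spp(wavelength_m, eps_metal, eps_dielectric=1.0):
        """SPP wavenumber at a flat metal/dielectric interface
        (textbook dispersion relation)."""
        k0 = 2 * np.pi / wavelength_m
        return k0 * np.sqrt(eps_metal * eps_dielectric /
                            (eps_metal + eps_dielectric))

    wavelength = 633e-9
    eps_gold = -11.6 + 1.2j   # illustrative value for gold near 633 nm
    k0 = 2 * np.pi / wavelength
    # True: the SPP carries more in-plane momentum than a free-space photon,
    # hence the usual need for a prism or grating coupler to bridge the gap.
    print(k_spp(wavelength, eps_gold).real > k0)
    ```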

  17. RF coupler for high-power CW FEL photoinjector

    SciTech Connect

    Kurennoy, S.; Young, L. M.

    2003-01-01

    A high-current emittance-compensated RF photoinjector is a key enabling technology for a high-power CW FEL. The design presently under way is a 100-mA, 2.5-cell, π-mode, 700-MHz, normal conducting demonstration CW RF photoinjector. This photoinjector will be capable of accelerating 3 nC per bunch with an emittance at the wiggler of less than 10 mm-mrad. The paper presents results for the RF coupling from ridged waveguides to the photoinjector RF cavity. The LEDA and SNS couplers inspired this 'dog-bone' design. Electromagnetic modeling of the coupler-cavity system has been performed using both 2-D and 3-D frequency-domain calculations, and a novel time-domain approach with MicroWave Studio. These simulations were used to adjust the coupling coefficient and calculate the power-loss distribution on the coupling slot. The cooling of this slot is a rather challenging thermal management project.

  18. Graphene-based terahertz tunable plasmonic directional coupler

    SciTech Connect

    He, Meng-Dong Wang, Kai-Jun; Wang, Lei; Li, Jian-Bo; Liu, Jian-Qiang; Huang, Zhen-Rong; Wang, Lingling; Wang, Lin; Hu, Wei-Da; Chen, Xiaoshuang

    2014-08-25

    We propose and numerically analyze a terahertz tunable plasmonic directional coupler which is composed of a thin metal film with a nanoscale slit, dielectric grating, a graphene sheet, and a dielectric substrate. The slit is employed to generate surface plasmon polaritons (SPPs), and the metal-dielectric grating-graphene-dielectric constructs a Bragg reflector, whose bandgap can be tuned over a wide frequency range by a small change in the Fermi energy level of graphene. As a graphene-based Bragg reflector is formed on one side of the slit, the structure enables SPP waves to be unidirectionally excited on the other side of the slit due to SPP interference, and the SPP waves in the Bragg reflector can be efficiently switched on and off by tuning the graphene's Fermi energy level. By introducing two optimized graphene-based Bragg reflectors into opposite sides of the slit, SPP waves can be guided to different Bragg reflectors at different Fermi energy levels, thus achieving a tunable bidirectional coupler.

  19. Supersymmetry-Inspired Non-Hermitian Optical Couplers

    PubMed Central

    Principe, Maria; Castaldi, Giuseppe; Consales, Marco; Cusano, Andrea; Galdi, Vincenzo

    2015-01-01

    Supersymmetry has been shown to provide a systematic and effective framework for generating classes of isospectral optical structures featuring perfectly-phase-matched modes, with the exception of one (fundamental) mode which can be removed. More recently, this approach has been extended to non-Hermitian scenarios characterized by spatially-modulated distributions of optical loss and gain, in order to allow the removal of higher-order modes as well. In this paper, we apply this approach to the design of non-Hermitian optical couplers with higher-order mode-selection functionalities, with potential applications to mode-division multiplexing in optical links. In particular, we highlight the critical role of the coupling between non-Hermitian optical waveguides, which generally induces a phase transition to a complex eigenspectrum, thereby hindering the targeted mode-selection functionality. With the specific example of an optical coupler that selects the second-order mode of a given waveguide, we illustrate the aforementioned limitations and propose possible strategies to overcome them, bearing in mind the practical feasibility of the gain levels required. PMID:25708887

  20. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high performance computers are presented. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time for the panel problem executed on a CONVEX computer of 181.6 seconds was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray Y-MP using an average of 3.63 processors.
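    For readers unfamiliar with the method, a bare-bones Lanczos tridiagonalization (without the factorized solves, sparse storage, or vectorization optimizations discussed above) looks roughly as follows; the matrix is a small dense stand-in for a structural eigenproblem.

    ```python
    import numpy as np

    def lanczos(A, k, rng=np.random.default_rng(0)):
        """Return the k x k tridiagonal matrix T produced by k Lanczos steps
        on a symmetric matrix A; eigenvalues of T approximate those of A."""
        n = A.shape[0]
        q = rng.standard_normal(n)
        q /= np.linalg.norm(q)
        q_prev = np.zeros(n)
        alphas, betas = [], []
        beta = 0.0
        for _ in range(k):
            w = A @ q - beta * q_prev          # matrix-vector product step
            alpha = q @ w
            w -= alpha * q
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            q_prev, q = q, w / beta
        return np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)

    A = np.diag(np.arange(1.0, 101.0))              # toy symmetric matrix
    print(np.linalg.eigvalsh(lanczos(A, 20))[-3:])  # approximates the largest eigenvalues
    ```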

  1. High-performance software MPEG video player for PCs

    NASA Astrophysics Data System (ADS)

    Eckart, Stefan

    1995-04-01

    This presentation describes the implementation of the video part of a high performance software MPEG player for PCs, capable of decoding both video and audio in real-time on a 90 MHz Pentium system. The basic program design concepts, the methods to achieve high performance, the quality versus speed trade-offs employed by the program, and performance figures, showing the contribution of the different decoding steps to the total computational effort, are presented. Several decoding stages work on up to four data words in parallel by splitting the 32 bit ALU into four virtual 8 bit ALUs. Care had to be taken to avoid arithmetic overflow in these stages. The 8 X 8 inverse DCT is based on a table driven symmetric forward-mapped algorithm which splits the IDCT into four 4 X 4 DCTs. In addition, the IDCT has been combined with the inverse quantization into a single computational step. The display process uses a fast 4 X 4 ordered dither algorithm in YUV space to quantize the 24 bit 4:2:0 YUV output of the decoder to the 8 bit color lookup table hardware of the PC.
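    As a generic illustration of one of the techniques mentioned, the sketch below applies a 4 x 4 ordered (Bayer) dither to an 8-bit channel. It shows the general idea only and is not the player's actual YUV-space implementation or its SIMD-style packed arithmetic.

    ```python
    import numpy as np

    # Standard 4x4 Bayer threshold matrix, scaled to 0..1
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(channel, levels):
        """Quantize an 8-bit channel (2-D uint8 array) to `levels` values,
        using a tiled 4x4 Bayer matrix to spread the quantization error."""
        h, w = channel.shape
        thresh = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        x = channel.astype(float) / 255.0 * (levels - 1)
        return np.clip(np.floor(x + thresh), 0, levels - 1).astype(np.uint8)

    # Hypothetical usage: reduce a luminance plane to 8 levels
    y = (np.random.default_rng(1).random((32, 48)) * 255).astype(np.uint8)
    print(ordered_dither(y, 8).max())
    ```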

  2. High performance computing: Clusters, constellations, MPPs, and future directions

    SciTech Connect

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors use of ''cluster'' to relate to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  3. USING MULTIRAIL NETWORKS IN HIGH-PERFORMANCE CLUSTERS

    SciTech Connect

    Coll, S.; Fratchtenberg, E.; Petrini, F.; Hoisie, A.; Gurvits, L.

    2001-01-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault tolerance of current high-performance clusters. We present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. We show that striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The compared methods include a basic round-robin rail allocation, a local-dynamic allocation based on local knowledge, and a dynamic rail allocation that reserves both communication endpoints of a message before sending it. The last method is shown to perform better than the others at higher loads: up to 49% better than local-knowledge allocation and 37% better than the round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for sufficiently large messages). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes. In addition we propose a hybrid algorithm that combines the benefits of the local-dynamic scheme for short messages with those of the dynamic algorithm for large messages. Keywords: Communication Protocols, High-Performance Interconnection Networks, Performance Evaluation, Routing, Communication Libraries, Parallel Architectures.
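    A minimal sketch of the baseline policy compared above, basic round-robin rail allocation, is given below under assumed names; the local-knowledge and endpoint-reserving dynamic schemes would replace the selection step with state-dependent logic.

    ```python
    import itertools

    class RoundRobinRailAllocator:
        """Assign outgoing messages to rails in strict rotation,
        ignoring rail load and message size (the baseline policy)."""
        def __init__(self, num_rails):
            self._cycle = itertools.cycle(range(num_rails))

        def allocate(self, message_size_bytes):
            # Baseline policy: the message size plays no role here.
            return next(self._cycle)

    alloc = RoundRobinRailAllocator(num_rails=4)
    print([alloc.allocate(64 * 1024) for _ in range(8)])  # 0,1,2,3,0,1,2,3
    ```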

  4. Implementing dynamic arrays: a challenge for high-performance machines

    SciTech Connect

    Mago, G.; Partain, W.

    1986-01-01

    There is an increasing need for high-performance AI machines. What is unusual about AI is that its programs are typically dynamic in the way their execution unfolds and in the data structures they use. AI therefore needs machines that are late-binding. Multiprocessors are often held out as the answer to AI's computing requirements. However, most success with multiprocessing has come from exploiting numerical computations' basic data structure - the static array (as in FORTRAN). A static array's structure does not change, so its elements (and the processing on them) may be readily distributed. In AI, the ability to change and manipulate the structure of data is paramount; hence the pre-eminence of the LISP list. Unfortunately, the traditional pointer-based list has serious drawbacks for distributed processing. The dynamic array is a data structure that allows random access to its elements (like static arrays) yet whose structure - size and dimensions - can be easily changed, i.e., bound and re-bound at run-time. It combines the flexibility that AI requires with the potential for high performance through parallel operation. A machine's implementation of dynamic arrays gives a good insight into its potential usefulness for AI applications. Therefore, the authors outline the implementation of dynamic arrays on a machine that they are developing.
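    To make the data structure concrete, here is a minimal growable array with geometric resizing, independent of any particular machine implementation (Python's built-in list already behaves this way; the class is purely illustrative).

    ```python
    class DynamicArray:
        """Random-access array whose capacity grows geometrically,
        giving amortized O(1) appends while keeping O(1) indexing."""
        def __init__(self):
            self._capacity = 4
            self._size = 0
            self._data = [None] * self._capacity

        def __getitem__(self, i):
            if not 0 <= i < self._size:
                raise IndexError(i)
            return self._data[i]

        def append(self, value):
            if self._size == self._capacity:   # grow: copy into a buffer twice as large
                self._capacity *= 2
                new_data = [None] * self._capacity
                new_data[:self._size] = self._data[:self._size]
                self._data = new_data
            self._data[self._size] = value
            self._size += 1

    arr = DynamicArray()
    for i in range(10):
        arr.append(i * i)
    print(arr[9])   # 81
    ```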

  5. Toward a theory of high performance.

    PubMed

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance. PMID:16028814

  6. Strategy Guideline. High Performance Residential Lighting

    SciTech Connect

    Holton, J.

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  7. High performance pitch-based carbon fiber

    SciTech Connect

    Tadokoro, Hiroyuki; Tsuji, Nobuyuki; Shibata, Hirotaka; Furuyama, Masatoshi

    1996-12-31

    A high performance pitch-based carbon fiber with a smaller diameter of six microns was developed by Nippon Graphite Fiber Corporation. This fiber possesses high tensile modulus, high tensile strength, excellent yarn handleability, a low thermal expansion coefficient, and high thermal conductivity, which make it an ideal material for space applications such as artificial satellites. Performance of this fiber as a reinforcement in composites was sufficient. With these characteristics, this pitch-based carbon fiber is expected to find a wide variety of applications in space structures, the industrial field, sporting goods, and civil infrastructure.

  8. High-performance neural networks. [Neural computers

    SciTech Connect

    Dress, W.B.

    1987-06-01

    The new Forth hardware architectures offer an intermediate solution to high-performance neural networks while the theory and programming details of neural networks for synthetic intelligence are developed. This approach has been used successfully to determine the parameters and run the resulting network for a synthetic insect consisting of a 200-node ''brain'' with 1760 interconnections. Both the insect's environment and its sensor input have thus far been simulated. However, the frequency-coded nature of the Browning network allows easy replacement of the simulated sensors by real-world counterparts.

  9. High performance channel injection sealant invention abstract

    NASA Technical Reports Server (NTRS)

    Rosser, R. W.; Basiulis, D. I.; Salisbury, D. P. (Inventor)

    1982-01-01

    The high performance channel sealant is based on NASA-patented cyano- and diamidoximine-terminated perfluoroalkylene ether prepolymers that are thermally condensed and cross-linked. The sealant contains asbestos and, in its preferred embodiments, Lithofrax, to lower its thermal expansion coefficient, and a phenolic metal deactivator. Extensive evaluation shows the sealant is extremely resistant to thermal degradation, with an onset point of 280 C. The materials have a volatile content of 0.18%, excellent flexibility, good adherence properties, and fuel resistance. No corrosive attack on aluminum or titanium was observed.

  10. An Introduction to High Performance Computing

    NASA Astrophysics Data System (ADS)

    Almeida, Sérgio

    2013-09-01

    High Performance Computing (HPC) has become an essential tool in every researcher's arsenal. Most research problems nowadays can be simulated, clarified or experimentally tested by using computational simulations. Researchers struggle with computational problems when they should be focusing on their research problems. Since most researchers have little-to-no knowledge in low-level computer science, they tend to look at computer programs as extensions of their minds and bodies instead of completely autonomous systems. Since computers do not work the same way as humans, the result is usually Low Performance Computing where HPC would be expected.

  11. High-Performance Water-Iodinating Cartridge

    NASA Technical Reports Server (NTRS)

    Sauer, Richard; Gibbons, Randall E.; Flanagan, David T.

    1993-01-01

    High-performance cartridge contains bed of crystalline iodine iodinates water to near saturation in single pass. Cartridge includes stainless-steel housing equipped with inlet and outlet for water. Bed of iodine crystals divided into layers by polytetrafluoroethylene baffles. Holes made in baffles and positioned to maximize length of flow path through layers of iodine crystals. Resulting concentration of iodine biocidal; suppresses growth of microbes in stored water or disinfects contaminated equipment. Cartridge resists corrosion and can be stored wet. Reused several times before necessary to refill with fresh iodine crystals.

  12. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    ERIC Educational Resources Information Center

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  13. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    ERIC Educational Resources Information Center

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  14. High Performance Compression of Science Data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Carpentieri, Bruno; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
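    The displacement-estimation step of block-matching motion compensation can be sketched with a plain exhaustive search, as below; the parallel architecture in the paper distributes this kind of per-block work, but the code is a serial reference under assumed array shapes.

    ```python
    import numpy as np

    def best_displacement(ref, cur, top, left, block=8, search=4):
        """Find the (dy, dx) displacement within +/-`search` pixels that
        minimizes the sum of absolute differences for one block."""
        target = cur[top:top + block, left:left + block].astype(int)
        best, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                r, c = top + dy, left + dx
                if r < 0 or c < 0 or r + block > ref.shape[0] or c + block > ref.shape[1]:
                    continue
                err = np.abs(ref[r:r + block, c:c + block].astype(int) - target).sum()
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    curr = np.roll(prev, (2, -1), axis=(0, 1))    # synthetic frame-to-frame motion
    # (-2, 1): the block's content came from 2 rows up, 1 column right in the reference
    print(best_displacement(prev, curr, 16, 16))
    ```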

  15. High performance compression of science data

    NASA Technical Reports Server (NTRS)

    Storer, James A.; Cohn, Martin

    1994-01-01

    Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.

  16. Improving estimations of greenhouse gas transfer velocities by atmosphere-ocean couplers in Earth-System and regional models

    NASA Astrophysics Data System (ADS)

    Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.

    2015-09-01

    Earth-System and regional models, which forecast climate change and its impacts, simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator while neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain, and surfactants. These factors have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis of novel couplers between the atmospheric and oceanographic model components. We tested performance with measured and simulated data from the European coastal ocean and found that our algorithm forecasts greenhouse gas exchanges that differ substantially from those forecast by the generalization currently in use. Our algorithm allows vectorized calculus and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-System model applications.
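    For contrast with the multi-factor algorithm described above, the classical wind-only generalization can be written in a few vectorized lines. The quadratic form and the reference Schmidt number of 660 follow the widely used Wanninkhof-style parameterization; the coefficient below is the commonly quoted value and is included only as an illustrative baseline, not as the article's algorithm.

    ```python
    import numpy as np

    def transfer_velocity_wind_only(u10, schmidt):
        """Classical wind-speed-only gas transfer velocity (cm/h) of the
        quadratic Wanninkhof type: k = a * u10**2 * (Sc/660)**-0.5.
        This is the simple generalization argued to be insufficient in
        coastal waters; vectorized over NumPy arrays."""
        a = 0.31  # commonly quoted coefficient (illustrative)
        u10 = np.asarray(u10, dtype=float)
        schmidt = np.asarray(schmidt, dtype=float)
        return a * u10**2 * (schmidt / 660.0) ** -0.5

    # Vectorized over a whole model grid at once (toy numbers):
    wind = np.array([3.0, 7.5, 12.0])      # 10-m wind speed, m/s
    sc = np.array([660.0, 720.0, 900.0])   # Schmidt number of CO2 in seawater
    print(transfer_velocity_wind_only(wind, sc))
    ```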

  17. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
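    One of the surveyed concepts, image-space data decomposition with a simple static assignment rule, can be sketched as follows; the tile size and round-robin policy are arbitrary illustrative choices.

    ```python
    def tile_assignments(width, height, tile, num_workers):
        """Split the framebuffer into square tiles and deal them out to
        workers in round-robin order (a basic image-space decomposition)."""
        assignments = {w: [] for w in range(num_workers)}
        tiles = [(x, y) for y in range(0, height, tile)
                        for x in range(0, width, tile)]
        for i, t in enumerate(tiles):
            assignments[i % num_workers].append(t)
        return assignments

    work = tile_assignments(1920, 1080, tile=128, num_workers=4)
    print({w: len(ts) for w, ts in work.items()})   # roughly equal tile counts
    ```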

  18. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
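    The task-parallel (MPI) execution mentioned above can be illustrated with a minimal mpi4py pattern that deals independent analysis tasks to ranks; the file names and the analyze() routine are placeholders, and this is a generic sketch rather than the CASCADE team's actual tooling.

    ```python
    # Run with e.g.: mpirun -n 4 python analyze_tasks.py
    from mpi4py import MPI

    def analyze(path):
        """Placeholder for a per-file analysis routine (e.g. extremes detection)."""
        return f"processed {path}"

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Hypothetical list of independent inputs, known to every rank.
    tasks = [f"simulation_output_{i:03d}.nc" for i in range(100)]

    # Simple static task-parallel split: rank r takes every size-th task.
    results = [analyze(t) for t in tasks[rank::size]]

    # Gather per-rank results on rank 0 for the publication/archiving steps.
    all_results = comm.gather(results, root=0)
    if rank == 0:
        print(sum(len(r) for r in all_results), "tasks completed")
    ```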

  19. Design and implementation of a high performance network security processor

    NASA Astrophysics Data System (ADS)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000 based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.
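    The descriptor-based fragmentation and distribution idea can be mimicked in a few lines of generic code; the Descriptor fields, fragment size, and engine count below are inventions for the sketch and do not reflect the actual hardware interface.

    ```python
    from dataclasses import dataclass

    FRAGMENT_BYTES = 4096   # assumed fragment size for the sketch
    NUM_ENGINES = 4         # number of parallel crypto engines (assumed)

    @dataclass
    class Descriptor:
        engine_id: int      # which crypto engine processes this fragment
        offset: int         # byte offset of the fragment within the packet
        length: int         # fragment length in bytes

    def build_descriptors(packet_len):
        """Split a large packet into fixed-size fragments and spread them
        across the engine array so they can be processed concurrently."""
        descs = []
        for i, offset in enumerate(range(0, packet_len, FRAGMENT_BYTES)):
            length = min(FRAGMENT_BYTES, packet_len - offset)
            descs.append(Descriptor(engine_id=i % NUM_ENGINES,
                                    offset=offset, length=length))
        return descs

    for d in build_descriptors(10_000):
        print(d)
    ```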

  20. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when using Input/Output (I/O) intensive applications, the RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, by using standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck process. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept in the magnetic disk for fast retrieval. The optical disks are used as archive

  1. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  2. High-performance vertical organic transistors.

    PubMed

    Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn

    2013-11-11

    Vertical organic thin-film transistors (VOTFTs) are promising devices to overcome the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood and VOTFTs often require complex patterning techniques using self-assembly processes which impedes a future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographical patterning directly and strongly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. PMID:23637074

  3. Strategy Guideline: Partnering for High Performance Homes

    SciTech Connect

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  4. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long standing idea. Indeed, the difference in processing performance and energy efficiency between radiation hardened components and COTS components is so important that COTS components are very attractive for use in mass and power constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. High performance stationary phases for planar chromatography.

    PubMed

    Poole, Salwa K; Poole, Colin F

    2011-05-13

    The kinetic performance of stabilized particle layers, particle membranes, and thin films for thin-layer chromatography is reviewed with a focus on how layer characteristics and experimental conditions affect the observed plate height. Forced flow and pressurized planar electrochromatography are identified as the best candidates to overcome the limited performance achieved by capillary flow for stabilized particle layers. For conventional and high performance plates, band broadening is dominated by molecular diffusion at the low mobile phase velocities typical of capillary flow systems and by mass transfer, with a significant contribution from flow anisotropy, at the higher flow rates typical of forced flow systems. There are few possible changes to the structure of stabilized particle layers that would significantly improve their performance for capillary flow systems, while for forced flow a number of avenues for further study are identified. New media for ultra thin-layer chromatography show encouraging possibilities for miniaturized high performance systems, but the realization of their true performance requires improvements in instrumentation for sample application and detection.
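
    The velocity dependence described above is commonly summarized by the van Deemter relation H(u) = A + B/u + C·u, in which the B term (longitudinal diffusion) dominates at the low velocities typical of capillary flow and the A and C terms (flow anisotropy and mass transfer) dominate at the higher velocities of forced flow. The short sketch below merely evaluates that relation with hypothetical coefficients; it is not data from the review.

    import numpy as np

    # Van Deemter plate height H(u) = A + B/u + C*u with hypothetical coefficients.
    def plate_height(u, A=1.0e-3, B=2.0e-3, C=5.0e-3):
        """Plate height H (cm) versus mobile-phase velocity u (cm/s)."""
        return A + B / u + C * u

    u = np.linspace(0.05, 2.0, 200)        # velocity range, cm/s
    H = plate_height(u)
    u_opt = u[np.argmin(H)]                # numerical optimum, close to sqrt(B/C)
    print(f"optimum velocity ~ {u_opt:.2f} cm/s, minimum plate height ~ {H.min()*1e4:.0f} um")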

  6. High-performance computing for airborne applications

    SciTech Connect

    Quinn, Heather M; Manuzzato, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  7. Arteriopathy in the high-performance athlete.

    PubMed

    Takach, Thomas J; Kane, Peter N; Madjarov, Jeko M; Holleman, Jeremiah H; Nussbaum, Tzvi; Robicsek, Francis; Roush, Timothy S

    2006-01-01

    Pain occurs frequently in high-performance athletes and is most often due to musculoskeletal injury or strain. However, athletes who participate in sports that require highly frequent, repetitive limb motion can also experience pain from an underlying arteriopathy, which causes exercise-induced ischemia. We reviewed the clinical records and follow-up care of 3 high-performance athletes (mean age, 29.3 yr; range, 16-47 yr) who were admitted consecutively to our institution from January 2002 through May 2003, each with a diagnosis of limb ischemia due to arteriopathy. The study group comprised 3 males: 2 active in competitive baseball (ages, 16 and 19 yr) and a cyclist (age, 47 yr). Provocative testing and radiologic evaluation established the diagnoses. Treatment goals included targeted resection of compressive structures, arterial reconstruction to eliminate stenosis and possible emboli, and improvement of distal perfusion. Our successful reconstructive techniques included thoracic outlet decompression and interpositional bypass of the subclavian artery in the 16-year-old patient, pectoralis muscle and tendon decompression to relieve compression of the axillary artery in the 19-year-old, and patch angioplasty for endofibrosis affecting the external iliac artery in the 47-year-old. Each patient was asymptomatic on follow-up and had resumed participation in competitive athletics. The recognition and anatomic definition of an arteriopathy that produces exercise-induced ischemia enables the application of precise therapy that can produce a symptom-free outcome and the ability to resume competitive athletics.

  8. Fiber-optic couplers. January 1973-February 1988 (citations from the NTIS data base). Report for January 1973-February 1988

    SciTech Connect

    Not Available

    1988-03-01

    This bibliography contains citations concerning the design, fabrication, analysis, performance evaluation, and applications of fiber-optic couplers. Topics include optical coupling for fiber-optic transmission lines, frequency and wavelength division multiplexing, multiwavelength coupler-decouplers, single mode and multimode couplers, and fiber-optic gyroscope applications. Various types of couplers are examined including waveguide, star, access, duplex, data bus, passive, tee, and holographic. Patented fiber-optic devices using couplers are included. Citations concerning fiber-optic connectors are excluded and examined in a separate bibliography. (Contains 218 citations fully indexed and including a title list.)

  9. Analysis and Design of Three-Line Microstrip Couplers on Anisotropic Substrates

    NASA Astrophysics Data System (ADS)

    Yu, Lukang

    The behavior of guided modes and their associated properties of three-line microstrip couplers on anisotropic substrates are investigated in this dissertation. The wave propagation on these lines is described by three normal modes modified by the mutual coupling between the lines under the quasi-TEM assumption. The conditions for the equalization of the mode impedances and phase velocities, and the matching and isolation of the coupler ports are derived using the normal-mode characteristics and six-port scattering parameters. The use of dielectric anisotropy is suggested as a means to improve the coupler directivity degraded by the structural inhomogeneity. Analytical results based on the coupled-mode formulation are summarized. The Green's function for the microstrip configuration is derived to transform an anisotropic problem into an isotropic problem. The normal-mode characteristics of the coupler are determined from the structural capacitances. Effects due to strip thickness, loss, and dispersion are dealt with. The design equations for symmetrical lines are derived. The existence of an ideal six-port directional coupler is proved and the singular behavior of the normal-mode parameters occurring at the critical point of coupling-coefficient equalization is examined. Typical applications of three-line microstrip couplers in microwave integrated circuits are discussed with simulation results. Experimental results of a coupler fabricated on an Epsilam-10 substrate are presented for the verification of the theoretical findings.
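
    As a rough illustration of the capacitance-matrix route to the normal-mode parameters mentioned above, the sketch below follows the standard quasi-TEM recipe: the per-unit-length inductance matrix is obtained from the vacuum capacitance matrix, and the normal-mode phase velocities follow from the eigenvalues of the L·C product. The 3×3 matrices are hypothetical placeholders, not values from the dissertation.

    import numpy as np

    eps0, mu0 = 8.854e-12, 4e-7 * np.pi

    # Per-unit-length capacitance matrices (F/m) of a three-line coupler:
    # C with the dielectric present, C0 with the dielectric replaced by vacuum.
    # These numbers are hypothetical placeholders.
    C  = np.array([[ 1.7e-10, -2.0e-11, -5.0e-12],
                   [-2.0e-11,  1.8e-10, -2.0e-11],
                   [-5.0e-12, -2.0e-11,  1.7e-10]])
    C0 = np.array([[ 8.0e-11, -1.2e-11, -3.0e-12],
                   [-1.2e-11,  8.5e-11, -1.2e-11],
                   [-3.0e-12, -1.2e-11,  8.0e-11]])

    L = mu0 * eps0 * np.linalg.inv(C0)     # inductance matrix is set by geometry only
    eigvals, modes = np.linalg.eig(L @ C)  # eigenvalues equal 1/v_i^2 for each normal mode
    v_modes = 1.0 / np.sqrt(eigvals.real)
    print("normal-mode phase velocities (m/s):", np.sort(v_modes))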

  10. Multimode Directional Coupler for Utilization of Harmonic Frequencies from TWTAs

    NASA Technical Reports Server (NTRS)

    Simmons, Rainee N.; Wintucky, Edwin G.

    2013-01-01

    A novel waveguide multimode directional coupler (MDC) intended for the measurement and potential utilization of the second and higher order harmonic frequencies from high-power traveling wave tube amplifiers (TWTAs) has been successfully designed, fabricated, and tested. The design is based on the characteristic multiple propagation modes of the electric and magnetic field components of electromagnetic waves in a rectangular waveguide. The purpose was to create a rugged, easily constructed, more efficient waveguide-based MDC for extraction and exploitation of the second harmonic signal from the RF output of high-power TWTs used for space communications. The application would be a satellite-based beacon source needed for Q-band and V/W-band atmospheric propagation studies. The MDC could function as a CW narrow-band source or as a wideband source for study of atmospheric group delay effects on high-data-rate links. The MDC is fabricated from two sections of waveguide - a primary one for the fundamental frequency and a secondary waveguide for the second harmonic - that are joined together such that the second harmonic higher order modes are selectively coupled via precision-machined slots for propagation in the secondary waveguide. In the TWTA output waveguide port, both the fundamental and the second harmonic signals are present. These signals propagate in the output waveguide as the dominant and higher order modes, respectively. By including an appropriate mode selective waveguide directional coupler, such as the MDC presented here, at the output of the TWTA, the power at the second harmonic can be sampled and amplified to the power level needed for atmospheric propagation studies. The important conclusions from the preliminary test results for the multimode directional coupler are: (1) the second harmonic (Ka-band) can be measured and effectively separated from the fundamental (Ku-band) with no coupling of the latter, (2) power losses in the fundamental frequency

  11. High-performance planar nanoscale dielectric capacitors

    NASA Astrophysics Data System (ADS)

    Özçelik, V. Ongun; Ciraci, S.

    2015-05-01

    We propose a model for planar nanoscale dielectric capacitors consisting of a single layer, insulating hexagonal boron nitride (BN) stripe placed between two metallic graphene stripes, all forming commensurately a single atomic plane. First-principles density functional calculations on these nanoscale capacitors for different levels of charging and different widths of graphene-BN stripes mark high gravimetric capacitance values, which are comparable to those of supercapacitors made from other carbon-based materials. Present nanocapacitor models allow the fabrication of series, parallel, and mixed combinations which offer potential applications in two-dimensional flexible nanoelectronics, energy storage, and heat-pressure sensing systems.
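
    The series, parallel, and mixed combinations mentioned above follow the usual capacitor combination rules; the toy snippet below only evaluates those rules for a hypothetical per-stripe capacitance and is not taken from the paper.

    # Equivalent capacitance of series/parallel/mixed combinations of identical
    # planar nanocapacitor stripes; the per-stripe value is a hypothetical placeholder.
    def parallel(*caps):
        return sum(caps)                          # capacitances add in parallel

    def series(*caps):
        return 1.0 / sum(1.0 / c for c in caps)   # reciprocals add in series

    c = 1.0e-18  # hypothetical capacitance of one graphene/BN/graphene stripe, farads
    print(parallel(c, c, c))          # three stripes side by side
    print(series(c, c))               # two stripes end to end
    print(parallel(series(c, c), c))  # mixed combination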

  12. How to create high-performing teams.

    PubMed

    Lam, Samuel M

    2010-02-01

    This article is intended to discuss inspirational aspects on how to lead a high-performance team. Cogent topics discussed include how to hire staff through methods of "topgrading" with reference to Geoff Smart and "getting the right people on the bus" referencing Jim Collins' work. In addition, once the staff is hired, this article covers how to separate the "eagles from the ducks" and how to inspire one's staff by creating the right culture with suggestions for further reading by Don Miguel Ruiz (The four agreements) and John Maxwell (21 Irrefutable laws of leadership). In addition, Simon Sinek's concept of "Start with Why" is elaborated to help a leader know what the core element should be with any superior culture. PMID:20127598

  13. High-Performance, Low Environmental Impact Refrigerants

    NASA Technical Reports Server (NTRS)

    McCullough, E. T.; Dhooge, P. M.; Glass, S. M.; Nimitz, J. S.

    2001-01-01

    Refrigerants used in process and facilities systems in the US include R-12, R-22, R-123, R-134a, R-404A, R-410A, R-500, and R-502. All but R-134a, R-404A, and R-410A contain ozone-depleting substances that will be phased out under the Montreal Protocol. Some of the substitutes do not perform as well as the refrigerants they are replacing, require new equipment, and have relatively high global warming potentials (GWPs). New refrigerants are needed that address environmental, safety, and performance issues simultaneously. In efforts sponsored by Ikon Corporation, NASA Kennedy Space Center (KSC), and the US Environmental Protection Agency (EPA), ETEC has developed and tested a new class of refrigerants, the Ikon (registered) refrigerants, based on iodofluorocarbons (IFCs). These refrigerants are nonflammable, have essentially zero ozone-depletion potential (ODP), low GWP, and high performance (energy efficiency and capacity), and can be dropped into much existing equipment.

  14. High performance stepper motors for space mechanisms

    NASA Technical Reports Server (NTRS)

    Sega, Patrick; Estevenon, Christine

    1995-01-01

    Hybrid stepper motors are very well adapted to high performance space mechanisms. They are very simple to operate and are often used for accurate positioning and for smooth rotations. In order to fulfill these requirements, the motor torque, its harmonic content, and the magnetic parasitic torque have to be properly designed. Only finite element computations can provide enough accuracy to determine the toothed structures' magnetic permeance, whose derivative function leads to the torque. It is then possible to design motors with a maximum torque capability or with the most reduced torque harmonic content (less than 3 percent of fundamental). These latter motors are dedicated to applications where a microstep or a synchronous mode is selected for minimal dynamic disturbances. In every case, the capability to convert electrical power into torque is much higher than on DC brushless motors.

  15. [High-performance society and doping].

    PubMed

    Gallien, C L

    2002-09-01

    Doping is not limited to high-level athletes. Likewise it is not limited to the field of sports activities. The doping phenomenon observed in sports actually reveals an underlying question concerning the notion of sports itself and, more widely, society's conception of sports. In a high-performance society, which is also a high-risk society, doping behavior is observed in a large number of persons who may or may not participate in sports activities. The motivation is the search for individual success or profit. The fight against doping must therefore focus on individual responsibility and prevention in order to preserve athletes' health and maintain the ethical and educational value of sports activities.

  16. High-performance capillary electrophoresis of histones

    SciTech Connect

    Gurley, L.R.; London, J.E.; Valdez, J.G.

    1991-01-01

    A high performance capillary electrophoresis (HPCE) system has been developed for the fractionation of histones. This system involves electroinjection of the sample and electrophoresis in a 0.1M phosphate buffer at pH 2.5 in a 50 µm × 35 cm coated capillary. Electrophoresis was accomplished in 9 minutes, separating a whole histone preparation into its components in the following order of decreasing mobility: (MHP) H3, H1 (major variant), H1 (minor variant), (LHP) H3, (MHP) H2A (major variant), (LHP) H2A, H4, H2B, (MHP) H2A (minor variant), where MHP is the more hydrophobic component and LHP is the less hydrophobic component. This order of separation is very different from that found in acid-urea polyacrylamide gel electrophoresis and in reversed-phase HPLC and, thus, brings the histone biochemist a new dimension for the qualitative analysis of histone samples. 27 refs., 8 figs.

  17. High performance robotic traverse of desert terrain.

    SciTech Connect

    Whittaker, William

    2004-09-01

    This report presents tentative innovations to enable unmanned vehicle guidance for a class of off-road traverse at sustained speeds greater than 30 miles per hour. Analyses and field trials suggest that even greater navigation speeds might be achieved. The performance calls for innovation in mapping, perception, planning and inertial-referenced stabilization of components, hosted aboard capable locomotion. The innovations are motivated by the challenge of autonomous ground vehicle traverse of 250 miles of desert terrain in less than 10 hours, averaging 30 miles per hour. GPS coverage is assumed to be available with localized blackouts. Terrain and vegetation are assumed to be akin to that of the Mojave Desert. This terrain is interlaced with networks of unimproved roads and trails, which are a key to achieving the high performance mapping, planning and navigation that is presented here.

  18. High performance anode for advanced Li batteries

    SciTech Connect

    Lake, Carla

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon which is deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent fading of, or failure in, the capacity resulting from stress-induced fracturing of the Si particles and de-coupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that the production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance were the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  19. Power dependent pulse delay with asymmetric dual-core hybrid photonic crystal fiber coupler

    NASA Astrophysics Data System (ADS)

    Jing, Qi; Zhang, Xia; Wei, Wei; Huang, Yongqing; Ren, Xiaomin

    2014-02-01

    We propose a novel asymmetric dual-core hybrid photonic crystal fiber (PCF) coupler composed of a silicon tube as the left core and a silica core as the right core. Picosecond pulse delay can be controlled by adjusting the input power. The transmission modes, dispersion characteristics, and coupling coefficients of the proposed coupler are investigated numerically. The results demonstrate that it is possible to obtain a 2.0 ps time delay for a soliton pulse with 2.0 ps temporal width within a 1 cm length. Further numerical results show that the coupler can generate a 10.0 ps undistorted time advance within a 5 cm length.
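
    For background, the linear coupled-mode picture of a dual-core coupler is sketched below: the coupling coefficient and the propagation-constant mismatch between the cores set how power beats back and forth and define the coupling length. The parameter values are hypothetical, and the power-dependent (nonlinear) soliton delay that is the subject of the paper is not modeled here.

    import numpy as np

    kappa = 300.0   # coupling coefficient, 1/m (hypothetical)
    delta = 100.0   # half the propagation-constant mismatch between the cores, 1/m

    def core_powers(z, kappa, delta):
        """Fraction of power in each core after distance z, light launched into core 1."""
        g = np.sqrt(kappa**2 + delta**2)
        p2 = (kappa / g)**2 * np.sin(g * z)**2    # power transferred to core 2
        return 1.0 - p2, p2

    L_c = np.pi / (2.0 * np.sqrt(kappa**2 + delta**2))   # length of maximum transfer
    print(f"coupling length = {L_c*1e3:.2f} mm")
    for z in (0.25 * L_c, 0.5 * L_c, L_c):
        p1, p2 = core_powers(z, kappa, delta)
        print(f"z = {z*1e3:5.2f} mm: P1 = {p1:.3f}, P2 = {p2:.3f}")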

  20. Experimental Realization of Two Decoupled Directional Couplers in a Subwavelength Packing by Adiabatic Elimination.

    PubMed

    Mrejen, Michael; Suchowski, Haim; Hatakeyama, Taiki; Wang, Yuan; Zhang, Xiang

    2015-11-11

    On-chip optical data processing and photonic quantum integrated circuits require the integration of densely packed directional couplers at the nanoscale. However, the inherent evanescent coupling at this length scale severely limits the compactness of such on-chip photonic circuits. Here, inspired by adiabatic elimination in an N-level atomic system, we report an experimental realization of a pair of directional couplers that are effectively isolated from each other despite their subwavelength packing. This approach opens the way to ultradense arrays of waveguide couplers for integrated optical and quantum logic gates. PMID:26421374
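
    A generic coupled-mode sketch of adiabatic elimination is given below; it is not the authors' device. When the middle guide of a three-waveguide system is strongly detuned, it can be eliminated from the dynamics, leaving the outer guides connected only by a weak effective coupling of order kappa^2/Delta, so they stay essentially decoupled over the device length. All parameters are hypothetical.

    import numpy as np
    from scipy.integrate import solve_ivp

    kappa = 1.0    # nearest-neighbour coupling, 1/mm (hypothetical)
    Delta = 20.0   # detuning of the middle guide, 1/mm (|Delta| >> kappa)

    def coupled_modes(z, a):
        """Coupled-mode equations for guides 1-2-3 with a detuned middle guide."""
        a1, a2, a3 = a
        da1 = -1j * kappa * a2
        da2 = -1j * (kappa * a1 + Delta * a2 + kappa * a3)
        da3 = -1j * kappa * a2
        return [da1, da2, da3]

    z_end = 5.0    # propagation length, mm
    sol = solve_ivp(coupled_modes, (0.0, z_end), [1.0 + 0j, 0j, 0j], rtol=1e-9, atol=1e-12)
    powers = np.abs(sol.y[:, -1])**2
    print(f"power in guides 1,2,3 after {z_end:.1f} mm:", np.round(powers, 3))
    print("effective outer-guide coupling kappa^2/Delta =", kappa**2 / Delta, "1/mm")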

  1. Long-range plasmonic directional coupler switches controlled by nematic liquid crystals.

    PubMed

    Zografopoulos, D C; Beccherelli, R

    2013-04-01

    A liquid-crystal tunable plasmonic optical switch based on a long-range metal stripe directional coupler is proposed and theoretically investigated. Extensive electro-optic tuning of the coupler's characteristics is demonstrated by introducing a nematic liquid crystal layer above two coplanar plasmonic waveguides. The switching properties of the proposed plasmonic structure are investigated through rigorous liquid-crystal studies coupled with a finite-element based analysis of light propagation. A directional coupler optical switch is demonstrated, which combines very low power consumption, low operation voltages, adjustable crosstalk and coupling lengths, along with sufficiently reduced insertion losses. PMID:23571914

  2. Switching of transmission resonances in a two-channels coupler: A Boundary Wall Method scattering study

    NASA Astrophysics Data System (ADS)

    Nunes, A.; Zanetti, F. M.; Lyra, M. L.

    2016-10-01

    In this work, we study the transmission characteristics of a two-channel coupler model system using the Boundary Wall Method (BWM) to determine the solution of the corresponding scattering problem for an incident plane wave. We show that the BWM provides detailed information regarding the transmission resonances. In particular, we focus on the case of single-channel input, aiming to explore the energy switching performance of the coupler. We show that the coupler geometry can be tailored to allow the first transmission resonances to be predominantly transmitted on specific output channels, an important characteristic for the realization of logical operations.

  3. Simulation of TunneLadder Traveling-Wave Tube Input/Output Coupler Characteristics Using MAFIA

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.; Qureshi, A. Haq

    1996-01-01

    RF input/output coupler characteristics for the TunneLadder traveling-wave tube have been calculated using the three-dimensional computer code MAFIA and compared to experimental data with good agreement. The theory behind coupling of the TunneLadder interaction circuit to input and output waveguides is presented, and VSWR data are calculated for variations on principal coupler dimensions to provide insight into the manufacturing tolerances necessary for acceptable performance. The accuracy of results using MAFIA demonstrates how experimental hardware testing of three-dimensional coupler designs can be reduced.

  4. Aberration sensitivity of unstable resonator with semitransparent output coupler

    SciTech Connect

    Mikheyev, P.A.; Shepelenko, A.A.; Zaikin, A.P.

    1994-12-31

    An unstable resonator with a semitransparent output coupler is feasible for lasers with moderate gain and a large cross section of the active medium. The resonator fundamental mode can be obtained up to Fresnel numbers of 10 or more. The output beam quality does not differ much from that of a Gaussian beam, but at the same time the intensity distribution is rather flat and mode-medium coupling is better. This approach does not require mirrors with a tapered reflectivity profile. In practice, the design of this type of resonator often requires placing a spherical mirror inside to fold the optical path, which inevitably causes astigmatism. The present contribution describes the results of an investigation of the resonator's sensitivity to intracavity astigmatism. The requirements on the resonator setup to obtain nearly unperturbed fundamental mode operation, and a convenient resonator design to meet these requirements, are discussed.

  5. Element for use in an inductive coupler for downhole components

    SciTech Connect

    Hall, David R.; Fox, Joe

    2009-03-31

    An element for use in an inductive coupler for downhole components comprises an annular housing having a generally circular recess. The element further comprises a plurality of generally linear, magnetically conductive segments. Each segment includes a bottom portion, an inner wall portion, and an outer wall portion. The portions together define a generally linear trough from a first end to a second end of each segment. The segments are arranged adjacent to each other within the housing recess to form a generally circular trough. The ends of at least half of the segments are shaped such that the first end of one of the segments is complementary in form to the second end of an adjacent segment. In one embodiment, all of the ends are angled. Preferably, the first ends are angled with the same angle and the second ends are angled with the complementary angle.

  6. High Performance Geostatistical Modeling of Biospheric Resources

    NASA Astrophysics Data System (ADS)

    Pedelty, J. A.; Morisette, J. T.; Smith, J. A.; Schnase, J. L.; Crosier, C. S.; Stohlgren, T. J.

    2004-12-01

    We are using parallel geostatistical codes to study spatial relationships among biospheric resources in several study areas. For example, spatial statistical models based on large- and small-scale variability have been used to predict species richness of both native and exotic plants (hot spots of diversity) and patterns of exotic plant invasion. However, broader use of geostatistics in natural resource modeling, especially at regional and national scales, has been limited due to the large computing requirements of these applications. To address this problem, we implemented parallel versions of the kriging spatial interpolation algorithm. The first uses the Message Passing Interface (MPI) in a master/slave paradigm on an open source Linux Beowulf cluster, while the second is implemented with the new proprietary Xgrid distributed processing system on an Xserve G5 cluster from Apple Computer, Inc. These techniques are proving effective and provide the basis for a national decision support capability for invasive species management that is being jointly developed by NASA and the US Geological Survey.
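
    A minimal mpi4py sketch of the master/slave distribution described above is shown below. It is not the code used in this work: the per-block kriging solve is replaced by a placeholder inverse-distance interpolation, and the script name in the run command is hypothetical.

    # Run with at least two ranks, e.g.:  mpiexec -n 4 python parallel_kriging_sketch.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def interpolate_block(block_xy, obs_xy, obs_val):
        """Placeholder for the per-block kriging solve: inverse-distance weighting."""
        d = np.linalg.norm(block_xy[:, None, :] - obs_xy[None, :, :], axis=2) + 1e-9
        w = 1.0 / d**2
        return (w @ obs_val) / w.sum(axis=1)

    rng = np.random.default_rng(42)
    obs_xy, obs_val = rng.random((200, 2)), rng.random(200)   # observations known to all ranks

    if rank == 0:
        # Master: split the prediction grid into blocks, hand one block to each worker,
        # then collect the interpolated values back in order.
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                                    np.linspace(0, 1, 64)), axis=-1).reshape(-1, 2)
        blocks = np.array_split(grid, size - 1)
        for worker, blk in enumerate(blocks, start=1):
            comm.send(blk, dest=worker, tag=1)
        result = np.concatenate([comm.recv(source=w, tag=2) for w in range(1, size)])
        print("interpolated", result.size, "grid points using", size - 1, "workers")
    else:
        # Worker: receive a block, interpolate it, and return the result to the master.
        blk = comm.recv(source=0, tag=1)
        comm.send(interpolate_block(blk, obs_xy, obs_val), dest=0, tag=2)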

  7. High Performance Radiation Transport Simulations on TITAN

    SciTech Connect

    Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M; Hamilton, Steven P; Jarrell, Joshua J; Joubert, Wayne

    2012-01-01

    In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10--20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
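
    The KBA sweep mentioned above exploits a wavefront ordering of the spatial cells; the toy two-dimensional sketch below (which is not Denovo code) only illustrates that ordering: each cell depends on its upwind neighbours, so all cells on a common diagonal are independent and could be processed concurrently once the previous diagonal is finished.

    import numpy as np

    nx, ny = 6, 4
    wavefront = np.zeros((nx, ny), dtype=int)    # wavefront index assigned to each cell

    for wave in range(nx + ny - 1):              # one pass per diagonal i + j = wave
        cells = [(i, wave - i) for i in range(nx) if 0 <= wave - i < ny]
        for i, j in cells:                       # these cells are mutually independent
            upwind_done = (i == 0 or wavefront[i - 1, j] < wave) and \
                          (j == 0 or wavefront[i, j - 1] < wave)
            assert upwind_done, "upwind dependency violated"
            wavefront[i, j] = wave

    print(wavefront.T)                           # diagonal bands show the sweep order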

  8. High-performance computing MRI simulations.

    PubMed

    Stöcker, Tony; Vahedipour, Kaveh; Pflugfelder, Daniel; Shah, N Jon

    2010-07-01

    A new open-source software project is presented, JEMRIS, the Jülich Extensible MRI Simulator, which provides an MRI sequence development and simulation environment for the MRI community. The development was driven by the desire to achieve generality of simulated three-dimensional MRI experiments reflecting modern MRI systems hardware. The accompanying computational burden is overcome by means of parallel computing. Many aspects are covered that have not hitherto been simultaneously investigated in general MRI simulations such as parallel transmit and receive, important off-resonance effects, nonlinear gradients, and arbitrary spatiotemporal parameter variations at different levels. The latter can be used to simulate various types of motion, for instance. The JEMRIS user interface is very simple to use, but nevertheless it presents few limitations. MRI sequences with arbitrary waveforms and complex interdependent modules are modeled in a graphical user interface-based environment requiring no further programming. This manuscript describes the concepts, methods, and performance of the software. Examples of novel simulation results in active fields of MRI research are given.
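
    Simulators of this kind integrate the Bloch equations for very large numbers of spin isochromats, which is what makes parallel computing attractive. The single-isochromat free-precession sketch below is not JEMRIS code, and the relaxation times and off-resonance frequency are placeholder values; it only shows the elementary computation that gets replicated across spins.

    import numpy as np
    from scipy.integrate import solve_ivp

    T1, T2 = 1.0, 0.08   # relaxation times, s (placeholder tissue values)
    df = 50.0            # off-resonance frequency, Hz
    M0 = 1.0

    def bloch(t, M):
        """Bloch equations in the rotating frame (free precession and relaxation)."""
        Mx, My, Mz = M
        w = 2 * np.pi * df
        return [ w * My - Mx / T2,
                -w * Mx - My / T2,
                (M0 - Mz) / T1]

    # Start from magnetization tipped into the transverse plane by a 90-degree pulse.
    sol = solve_ivp(bloch, (0.0, 0.3), [M0, 0.0, 0.0], max_step=1e-3)
    Mxy = np.hypot(sol.y[0], sol.y[1])
    print(f"|Mxy| after 100 ms: {Mxy[np.searchsorted(sol.t, 0.1)]:.3f}")
    print(f"Mz after 300 ms:    {sol.y[2, -1]:.3f}")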

  9. High Performance Photogrammetric Processing on Computer Clusters

    NASA Astrophysics Data System (ADS)

    Adrov, V. N.; Drakin, M. A.; Sechin, A. Y.

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent pieces of data as input and produce independent pieces as output. This independence comes from the nature of the algorithms, since images, stereopairs, or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicking, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing jobs of several days to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations, increasing speeds in local networks and, as a result, falling prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  10. Optical design of a high power fiber optic coupler

    SciTech Connect

    English, R.E. Jr.; Halpin, J.M.; House, F.A.; Paris, R.D.

    1991-06-19

    Fiber optic beam delivery systems are replacing conventional mirror delivery systems for many reasons (e.g., system flexibility and redundancy, stability, and ease of alignment). Commercial products are available that use fiber optic delivery for laser surgery and materials processing. Also, the pump light of dye lasers can be delivered by optical fibers. Many laser wavelengths have been transported via optical fibers; high power delivery has been reported for argon, Nd:YAG, and excimer lasers. We have been developing fiber optic beam delivery systems for copper vapor laser light; many of the fundamental properties of these systems are applicable to other high power delivery applications. A key element of fiber optic beam delivery systems is the coupling of laser light into the optical fiber. For our application this optical coupler must be robust to a range of operating parameters and laser characteristics. We have access to a high power copper vapor laser beam that is generated by a master oscillator/power amplifier (MOPA) chain comprised of three amplifiers. The light has a pulse width of 40-50 nsec with a repetition rate of about 4 kHz. The average power (nominal) to be injected into a fiber is 200 W. (We will refer to average power in this paper.) In practice, the laser beam's direction and collimation change with time. These characteristics, plus other mechanical and operational constraints, make it difficult for our coupler to be opto-mechanically referenced to the laser beam. We describe the specifications, design, and operation of an optical system that couples a high-power copper vapor laser beam into a large core, multimode fiber. The approach used and observations reported are applicable to other fiber optic delivery applications. 6 refs., 6 figs.

  11. Improving UV Resistance of High Performance Fibers

    NASA Astrophysics Data System (ADS)

    Hassanin, Ahmed

    High performance fibers are characterized by their superior properties compared to traditional textile fibers. High strength fibers have high moduli, high strength-to-weight ratios, high chemical resistance, and usually high temperature resistance. They are used in applications where superior properties are needed, such as bulletproof vests, ropes and cables, cut-resistant products, load tendons for giant scientific balloons, fishing rods, tennis racket strings, parachute cords, adhesives and sealants, protective apparel, and tire cords. Unfortunately, ultraviolet (UV) radiation causes serious degradation to most high performance fibers. UV light, either natural or artificial, causes organic compounds to decompose and degrade, because the energy of UV photons is high enough to break chemical bonds, causing chain scission. This work aims at achieving maximum protection of high performance fibers using sheathing approaches. The proposed sheaths are lightweight, in order to maintain the main advantage of high performance fibers, namely their high strength-to-weight ratio. This study involves developing three different types of sheathing. The product of interest that needs to be protected from UV is a braid of PBO. The first approach is extruding a sheath of low density polyethylene (LDPE) loaded with different percentages of rutile TiO2 nanoparticles around the PBO braid. The results of this approach showed that the LDPE sheath loaded with 10% TiO2 by weight achieved the highest protection compared to 0% and 5% TiO2, where protection is judged by the strength loss of the PBO. This trend was observed in different weathering environments, where the sheathed samples were exposed to UV-VIS radiation in different weatherometer instruments as well as to a high-altitude environment using a NASA BRDL balloon. The second approach focuses on developing a protective porous membrane of polyurethane loaded with rutile TiO2 nanoparticles. Membrane from polyurethane loaded with 4

  12. An evaluation of Java's I/O capabilities for high-performance computing.

    SciTech Connect

    Dickens, P. M.; Thakur, R.

    2000-11-10

    Java is quickly becoming the preferred language for writing distributed applications because of its inherent support for programming on distributed platforms. In particular, Java provides compile-time and run-time security, automatic garbage collection, inherent support for multithreading, support for persistent objects and object migration, and portability. Given these significant advantages of Java, there is a growing interest in using Java for high-performance computing applications. To be successful in the high-performance computing domain, however, Java must have the capability to efficiently handle the significant I/O requirements commonly found in high-performance computing applications. While there has been significant research in high-performance I/O using languages such as C, C++, and Fortran, there has been relatively little research into the I/O capabilities of Java. In this paper, we evaluate the I/O capabilities of Java for high-performance computing. We examine several approaches that attempt to provide high-performance I/O--many of which are not obvious at first glance--and investigate their performance in both parallel and multithreaded environments. We also provide suggestions for expanding the I/O capabilities of Java to better support the needs of high-performance computing applications.

  13. NCI's Transdisciplinary High Performance Scientific Data Platform

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  14. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygonal, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  15. A parallel coupled oceanic-atmospheric general circulation model

    SciTech Connect

    Wehner, M.F.; Bourgeois, A.J.; Eltgroth, P.G.; Duffy, P.B.; Dannevik, W.P.

    1994-12-01

    The Climate Systems Modeling group at LLNL has developed a portable coupled oceanic-atmospheric general circulation model suitable for use on a variety of massively parallel (MPP) computers of the multiple instruction, multiple data (MIMD) class. The model is composed of parallel versions of the UCLA atmospheric general circulation model, the GFDL modular ocean model (MOM), and a dynamic sea ice model based on the Hibler formulation extracted from the OPYC ocean model. The strategy to achieve parallelism is twofold. One level of parallelism is accomplished by applying two-dimensional domain decomposition techniques to each of the three constituent submodels. A second level of parallelism is attained by concurrent execution of the AGCM and OGCM/sea ice components on separate sets of processors. For this functional decomposition scheme, a flux coupling module has been written to calculate the heat, moisture, and momentum fluxes independently of either the AGCM or the OGCM modules. The flux coupler's other roles are to facilitate the transfer of data between subsystem components and processors via message passing techniques and to interpolate and aggregate between the possibly incommensurate meshes.
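
    A minimal mpi4py sketch of the two-level decomposition described above is given below: the processor pool is split into an atmosphere group and an ocean/sea-ice group that run concurrently, and the leaders of the two groups exchange aggregated surface fluxes at each coupling interval. It is not the LLNL code; the "fluxes" are random placeholders and the script name in the run command is hypothetical.

    # Run with at least two ranks, e.g.:  mpiexec -n 6 python flux_coupler_sketch.py
    import numpy as np
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank, size = world.Get_rank(), world.Get_size()

    n_atm = size // 2                        # first half of the ranks runs the AGCM
    color = 0 if rank < n_atm else 1         # second half runs the OGCM / sea ice
    comp = world.Split(color, key=rank)      # intra-component communicator

    leader = 0 if color == 0 else n_atm      # world rank of this component's leader
    remote_leader = n_atm if color == 0 else 0

    for step in range(3):                    # coupling intervals
        # ... each component advances its own domain-decomposed model here ...
        local_flux = np.random.default_rng(step + rank).random(4)   # placeholder fluxes
        flux = np.zeros_like(local_flux)
        comp.Reduce(local_flux, flux, op=MPI.SUM, root=0)    # aggregate onto the leader

        if comp.Get_rank() == 0:             # the two leaders swap aggregated fluxes
            recv = np.empty_like(flux)
            world.Sendrecv(flux, dest=remote_leader, recvbuf=recv, source=remote_leader)
            flux = recv
        comp.Bcast(flux, root=0)             # redistribute the partner's fluxes
        if rank == leader:
            print(f"step {step}, component {color}: received flux sum = {flux.sum():.3f}")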

  16. High Performance Computing CFRD -- Final Technical Report

    SciTech Connect

    Hope Forsmann; Kurt Hamman

    2003-01-01

    The Bechtel Waste Treatment Project (WTP), located in Richland, WA, is comprised of many processes containing complex physics. Accurate analyses of the underlying physics of these processes are needed to reduce the added costs during and after construction that are due to unknown process behavior. The WTP will have tight operating margins in order to complete the treatment of the waste on schedule. The combination of tight operating constraints coupled with complex physical processes requires analysis methods that are more accurate than traditional approaches. This study is focused specifically on multidimensional computer-aided solutions. There are many skills and tools required to solve engineering problems. Many physical processes are governed by nonlinear partial differential equations. These governing equations have few, if any, closed form solutions. Past and present solution methods require assumptions to reduce these equations to solvable forms. Computational methods take the governing equations and solve them directly on a computational grid. This ability to approach the equations in their exact form reduces the number of assumptions that must be made. This approach increases the accuracy of the solution and its applicability to the problem at hand. Recent advances in computer technology have allowed computer simulations to become an essential tool for problem solving. In order to perform computer simulations as quickly and accurately as possible, both hardware and software must be evaluated. With regard to hardware, average consumer personal computers (PCs) are not configured for optimal scientific use. Only a few vendors create high performance computers to satisfy engineering needs. Software must be optimized for quick and accurate execution. Operating systems must utilize the hardware efficiently while supplying the software with seamless access to the computer's resources. From the perspective of Bechtel Corporation and the Idaho

  17. High performance computations using dynamical nucleation theory

    SciTech Connect

    Windus, Theresa L.; Kathmann, Shawn M.; Crosby, Lonnie D.

    2008-07-14

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities are described. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A "master-slave" solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are also described. This work was supported by the U.S. Department of Energy's (DOE) Office of Basic Energy Sciences, Chemical Sciences program. The Pacific Northwest National Laboratory is operated by Battelle for DOE.

  18. High-performance computers for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  19. High Performance Oxides-Based Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Ren, Guangkun; Lan, Jinle; Zeng, Chengcheng; Liu, Yaochun; Zhan, Bin; Butt, Sajid; Lin, Yuan-Hua; Nan, Ce-Wen

    2015-01-01

    Thermoelectric materials have attracted much attention due to their applications in waste-heat recovery, power generation, and solid state cooling. In comparison with thermoelectric alloys, oxide semiconductors, which are thermally and chemically stable in air at high temperature, are regarded as candidates for high-temperature thermoelectric applications. However, their figure-of-merit ZT value has remained low, around 0.1-0.4, for more than 20 years. The poor performance of oxides is ascribed to their low electrical conductivity and high thermal conductivity. Since the electrical transport properties in these thermoelectric oxides are strongly correlated, it is difficult to improve both the thermoelectric power and the electrical conductivity simultaneously by conventional methods. This review summarizes recent progress on high-performance oxide-based thermoelectric bulk materials, including n-type ZnO, SrTiO3, and In2O3, and p-type Ca3Co4O9, BiCuSeO, and NiO, enhanced by heavy-element doping, band engineering, and nanostructuring.
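
    The dimensionless figure of merit referred to above is ZT = S^2·σ·T/κ, with S the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. The one-line check below uses merely plausible placeholder values for an oxide, not data from the review.

    # ZT = S^2 * sigma * T / kappa; the numbers below are plausible placeholders only.
    def figure_of_merit(S, sigma, kappa, T):
        return S**2 * sigma * T / kappa

    # e.g. S = 200 uV/K, sigma = 1.0e4 S/m, kappa = 2.0 W/(m K), T = 1000 K  ->  ZT = 0.2
    print(f"ZT = {figure_of_merit(200e-6, 1.0e4, 2.0, 1000.0):.2f}")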

  20. High performance vapour-cell frequency standards

    NASA Astrophysics Data System (ADS)

    Gharavipour, M.; Affolderbach, C.; Kang, S.; Bandi, T.; Gruet, F.; Pellaton, M.; Mileti, G.

    2016-06-01

    We report our investigations on a compact high-performance rubidium (Rb) vapour-cell clock based on microwave-optical double-resonance (DR). These studies are done in both DR continuous-wave (CW) and Ramsey schemes using the same Physics Package (PP), with the same Rb vapour cell and a magnetron-type cavity with only 45 cm^3 external volume. In the CW-DR scheme, we demonstrate a DR signal with a contrast of 26% and a linewidth of 334 Hz; in Ramsey-DR mode, Ramsey signals with higher contrast up to 35% and a linewidth of 160 Hz have been demonstrated. Short-term stabilities of 1.4×10^-13 τ^-1/2 and 2.4×10^-13 τ^-1/2 are measured for the CW-DR and Ramsey-DR schemes, respectively. In Ramsey-DR operation, thanks to the separation of the light and microwave interactions in time, the light-shift effect is suppressed, which allows the long-term clock stability to be improved compared to CW-DR operation. Implementations in miniature atomic clocks are considered.

  1. Low-Cost High-Performance MRI.

    PubMed

    Sarracanie, Mathieu; LaPierre, Cristen D; Salameh, Najat; Waddington, David E J; Witzel, Thomas; Rosen, Matthew S

    2015-01-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm(3) imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices. PMID:26469756

  2. Low-Cost High-Performance MRI

    NASA Astrophysics Data System (ADS)

    Sarracanie, Mathieu; Lapierre, Cristen D.; Salameh, Najat; Waddington, David E. J.; Witzel, Thomas; Rosen, Matthew S.

    2015-10-01

    Magnetic Resonance Imaging (MRI) is unparalleled in its ability to visualize anatomical structure and function non-invasively with high spatial and temporal resolution. Yet to overcome the low sensitivity inherent in inductive detection of weakly polarized nuclear spins, the vast majority of clinical MRI scanners employ superconducting magnets producing very high magnetic fields. Commonly found at 1.5-3 tesla (T), these powerful magnets are massive and have very strict infrastructure demands that preclude operation in many environments. MRI scanners are costly to purchase, site, and maintain, with the purchase price approaching $1 M per tesla (T) of magnetic field. We present here a remarkably simple, non-cryogenic approach to high-performance human MRI at ultra-low magnetic field, whereby modern under-sampling strategies are combined with fully-refocused dynamic spin control using steady-state free precession techniques. At 6.5 mT (more than 450 times lower than clinical MRI scanners) we demonstrate (2.5 × 3.5 × 8.5) mm3 imaging resolution in the living human brain using a simple, open-geometry electromagnet, with 3D image acquisition over the entire brain in 6 minutes. We contend that these practical ultra-low magnetic field implementations of MRI (<10 mT) will complement traditional MRI, providing clinically relevant images and setting new standards for affordable (<$50,000) and robust portable devices.

  3. USING MULTITAIL NETWORKS IN HIGH PERFORMANCE CLUSTERS

    SciTech Connect

    S. COLL; E. FRACHTEMBERG; F. PETRINI; A. HOISIE; L. GURVITS

    2001-03-01

    Using multiple independent networks (also known as rails) is an emerging technique to overcome bandwidth limitations and enhance fault tolerance of current high-performance clusters. We present and analyze various avenues for exploiting multiple rails. Different rail access policies are presented and compared, including static and dynamic allocation schemes. An analytical lower bound on the number of networks required for static rail allocation is shown. We also present an extensive experimental comparison of the behavior of various allocation schemes in terms of bandwidth and latency. Striping messages over multiple rails can substantially reduce network latency, depending on average message size, network load, and allocation scheme. The methods compared include a static rail allocation, a round-robin rail allocation, a dynamic allocation based on local knowledge, and a rail allocation that reserves both end-points of a message before sending it. The latter is shown to perform better than other methods at higher loads: up to 49% better than local-knowledge allocation and 37% better than round-robin allocation. This allocation scheme also shows lower latency and saturates at higher loads (for messages large enough). Most importantly, this proposed allocation scheme scales well with the number of rails and message sizes.
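
    The toy snippet below (not the authors' simulator) contrasts two of the allocation policies compared above: a static policy that pins every source node to one rail, and a round-robin policy that stripes successive messages across the rails.

    from itertools import count

    N_RAILS = 4

    def static_rail(src_node, n_rails=N_RAILS):
        """Static allocation: every message from a given node always uses the same rail."""
        return src_node % n_rails

    _next_msg = count()
    def round_robin_rail(n_rails=N_RAILS):
        """Round-robin allocation: successive messages rotate over the rails."""
        return next(_next_msg) % n_rails

    messages = [(0, 64), (0, 4096), (3, 64), (5, 1000000)]   # (source node, bytes)
    for src, nbytes in messages:
        print(f"node {src}, {nbytes:>7} B -> static rail {static_rail(src)}, "
              f"round-robin rail {round_robin_rail()}")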

  4. Towards high performance inverted polymer solar cells

    NASA Astrophysics Data System (ADS)

    Gong, Xiong

    2013-03-01

    Bulk heterojunction polymer solar cells that can be fabricated by solution processing techniques are under intense investigation in both academic institutions and industrial companies because of their potential to enable mass production of flexible and cost-effective alternatives to silicon-based electronics. Despite the envisioned advantages and recent technology advances, so far the performance of polymer solar cells is still inferior to that of their inorganic counterparts in terms of efficiency and stability. There are many factors limiting the performance of polymer solar cells. Among them, the optical and electronic properties of the materials in the active layer, the device architecture, and the elimination of PEDOT:PSS are the most determining factors in the overall performance of polymer solar cells. In this presentation, I will present how we approach high performance in polymer solar cells. For example, by developing novel materials, fabricating polymer photovoltaic cells with an inverted device structure, and eliminating PEDOT:PSS, we were able to observe over 8.4% power conversion efficiency from inverted polymer solar cells.

  5. An integrated high performance Fastbus slave interface

    SciTech Connect

    Christiansen, J.; Ljuslin, C. )

    1993-08-01

    A high performance CMOS Fastbus slave interface ASIC (Application Specific Integrated Circuit) supporting all addressing and data transfer modes defined in the IEEE 960 - 1986 standard is presented. The FAstbus Slave Integrated Circuit (FASIC) is an interface between the asynchronous Fastbus and a clock synchronous processor/memory bus. It can work stand-alone or together with a 32 bit microprocessor. The FASIC is a programmable device enabling its direct use in many different applications. A set of programmable address mapping windows can map Fastbus addresses to convenient memory addresses and at the same time act as address decoding logic. Data rates of 100 MBytes/sec to Fastbus can be obtained using an internal FIFO in the FASIC to buffer data between the two buses during block transfers. Message passing from Fastbus to a microprocessor on the slave module is supported. A compact (70 mm x 170 mm) Fastbus slave piggy back sub-card interface including level conversion between ECL and TTL signal levels has been implemented using surface mount components and the 208 pin FASIC chip.

  6. High performance composites with active stiffness control.

    PubMed

    Tridech, Charnwit; Maples, Henry A; Robinson, Paul; Bismarck, Alexander

    2013-09-25

    High performance carbon fiber reinforced composites with controllable stiffness could revolutionize the use of composite materials in structural applications. Here we describe a structural material, which has a stiffness that can be actively controlled on demand. Such a material could have applications in morphing wings or deployable structures. A carbon fiber reinforced-epoxy composite is described that can undergo an 88% reduction in flexural stiffness at elevated temperatures and fully recover when cooled, with no discernible damage or loss in properties. Once the stiffness has been reduced, the required deformations can be achieved at much lower actuation forces. For this proof-of-concept study a thin polyacrylamide (PAAm) layer was electrocoated onto carbon fibers that were then embedded into an epoxy matrix via resin infusion. Heating the PAAm coating above its glass transition temperature caused it to soften and allowed the fibers to slide within the matrix. To produce the stiffness change the carbon fibers were used as resistance heating elements by passing a current through them. When the PAAm coating had softened, the ability of the interphase to transfer load to the fibers was significantly reduced, greatly lowering the flexural stiffness of the composite. By changing the moisture content in PAAm fiber coating, the temperature at which the PAAm softens and the composites undergo a reduction in stiffness can be tuned. PMID:23978266

  7. Fast surrogate-assisted simulation-driven optimization of compact microwave hybrid couplers

    NASA Astrophysics Data System (ADS)

    Kurgan, Piotr; Koziel, Slawomir

    2016-07-01

    This work presents a robust methodology for expedited simulation-driven design optimization of compact microwave hybrid couplers. The technique relies on problem decomposition, and a bottom-up design strategy, starting from the level of basic building blocks of the coupler, and finishing with a tuning procedure that exploits a fast surrogate model of the entire structure. The latter is constructed by cascading local response surface approximations of coupler elementary elements. The cross-coupling effects within the structure are neglected in the first stage of the design process; however, they are accounted for in the tuning phase by means of space-mapping correction of the surrogate. The proposed approach is demonstrated through the design of a compact rat-race and two branch-line couplers. In all cases, the computational cost of the optimization process is very low and corresponds to just a few high-fidelity electromagnetic simulations of respective structures. Experimental validation is also provided.
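
    The sketch below illustrates the general surrogate idea in a much-simplified form: a cheap quadratic response surface is fitted to a handful of expensive "EM simulation" samples of one element, and the optimizer then works on the surrogate. The objective function, sample counts, and variable names are toy placeholders; the authors' problem decomposition and space-mapping correction are not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    def em_simulation(x):
        """Placeholder for one expensive high-fidelity EM simulation of a coupler element."""
        w, l = x                                   # e.g. line width and length (mm)
        return (w - 0.8)**2 + 2.0 * (l - 5.0)**2 + 0.1 * w * l

    # 1. Sample the expensive model at a few design points.
    rng = np.random.default_rng(1)
    X = rng.uniform([0.4, 3.0], [1.2, 7.0], size=(15, 2))
    y = np.array([em_simulation(x) for x in X])

    # 2. Fit a quadratic response surface f(x) ~ c . phi(x).
    def basis(x):
        w, l = x
        return np.array([1.0, w, l, w * l, w**2, l**2])

    coeff, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y, rcond=None)
    surrogate = lambda x: basis(x) @ coeff

    # 3. Optimize the cheap surrogate, then verify with one more expensive simulation.
    res = minimize(surrogate, x0=[0.8, 5.0], bounds=[(0.4, 1.2), (3.0, 7.0)])
    print("surrogate optimum:", np.round(res.x, 3),
          " high-fidelity value there:", round(em_simulation(res.x), 4))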

  8. Research of advanced optical coupler coating technology on extending lifetime of high power laser

    NASA Astrophysics Data System (ADS)

    Xu, Cheng-lin; Si, Xu; Mu, Wei; Ma, Yun-liang; Xiao, Chun

    2015-10-01

    We studied the coupler coating technology. The research shows that coating the internal structure of the coupler requires consideration of both mechanical strength and heat dissipation. For instance, a thicker coating increases the coupler's resistance to stress and to water vapor, but a thinner coating is preferred because it transmits the light more easily and generates less heat. We tried a number of different coating materials and analyzed their adhesion during the curing process. Based on the experimental results, we conclude that cooling capacity must be considered first. Recent experimental results show that advanced coupler coating technology can extend the working life of the coupler. At the end of the paper, we provide a coating example and show its real contribution to the working life.

  9. Total internal reflection-based planar waveguide solar concentrator with symmetric air prisms as couplers.

    PubMed

    Xie, Peng; Lin, Huichuan; Liu, Yong; Li, Baojun

    2014-10-20

    We present a waveguide coupling approach for a planar waveguide solar concentrator. In this approach, total internal reflection (TIR)-based symmetric air prisms are used as couplers to increase the coupler reflectivity and to maximize the optical efficiency. The proposed concentrator consists of a line-focusing cylindrical lens array over a planar waveguide. The TIR-based couplers are located at the focal line of each lens to couple the focused sunlight into the waveguide. The optical system was modeled and simulated with commercial ray-tracing software (Zemax). Results show that the system with optimized TIR-based couplers can achieve 70% optical efficiency at a 50× geometrical concentration ratio, resulting in a flux concentration ratio of 35 without an additional secondary concentrator. An acceptance angle of ±7.5° is achieved in the x-z plane due to the use of the cylindrical lens array as the primary concentrator.
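
    The quoted numbers are consistent with the usual relation between flux and geometric concentration (a standard relation restating the figures above, not an additional result):

      C_{flux} = \eta_{opt} \times C_{geom} = 0.70 \times 50 = 35.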

  10. The variable input coupler for the Fermilab Vertical Cavity Test Facility

    SciTech Connect

    Champion, Mark; Ginsburg, Camille M.; Lunin, Andrei; Moeller, Wolf-Dietrich; Nehring, Roger; Poloubotko, Valeri; /Fermilab

    2008-09-01

    A variable input coupler has been designed for the Fermilab vertical cavity test facility (VCTF), a facility for CW RF vertical testing of bare ILC 1.3 GHz 9-cell SRF cavities at 2 K, to provide some flexibility in the test stand RF measurements. The variable coupler allows the cavity to be critically coupled for all RF tests, including all TM010 passband modes, which will simplify or make possible the measurement of those modes with very low end-cell fields, e.g., the π/9 mode. The variable coupler assembly mounts to the standard input coupler port on the cavity, and uses a cryogenic motor submerged in superfluid helium to control the antenna position. The RF and mechanical design and RF test results are described.

  11. Expanded-mode semiconductor laser with tapered-rib adiabatic-following fiber coupler

    SciTech Connect

    Vawter, G.A.; Smith, R.E.; Hou, H.; Wendt, J.R.

    1997-02-01

    A new diode laser using a Tapered-Rib Adiabatic-Following Fiber Coupler to achieve 2D mode expansion and narrow, symmetric far-field emission without epitaxial regrowth or sharply-defined tips on tapered waveguides is presented.

  12. Design and characterization of low-loss 2D grating couplers for silicon photonics integrated circuits

    NASA Astrophysics Data System (ADS)

    Lacava, C.; Carrol, L.; Bozzola, A.; Marchetti, R.; Minzioni, P.; Cristiani, I.; Fournier, M.; Bernabe, S.; Gerace, D.; Andreani, L. C.

    2016-03-01

    We present the characterization of silicon-on-insulator (SOI) photonic-crystal-based 2D grating couplers (2D-GCs) fabricated by CEA-Leti in the frame of the FP7 Fabulous project, which is dedicated to the realization of devices and systems for low-cost and high-performance passive optical networks. Different test structures are present on the analyzed samples, including a 2D-GC connected to another 2D-GC by different waveguides (in a Mach-Zehnder-like configuration), and a 2D-GC connected to two separate 2D-GCs, so as to allow a complete assessment of different parameters. Measurements were carried out using a tunable laser source operating in the extended telecom band and a fiber-based polarization-controlling system at the input of the device under test. The measured data yielded an overall fiber-to-fiber loss of 7.5 dB for the structure composed of an input 2D-GC connected to two identical 2D-GCs. This value was obtained at the peak wavelength of the grating, and the 3-dB bandwidth of the 2D-GC was assessed to be 43 nm. Assuming that the waveguide losses are negligible, so as to make a worst-case analysis, the coupling efficiency of a single 2D-GC is -3.75 dB, constituting, to the best of our knowledge, the lowest value ever reported for a fully CMOS-compatible 2D-GC. Both values are in good agreement with those expected from numerical simulations performed using full 3D analysis with Lumerical FDTD Solutions.
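
    The per-coupler figure follows directly from the measured link loss under the stated worst-case assumption of lossless waveguides: the light crosses two identical 2D-GCs, so

      IL_{2D\text{-}GC} = \frac{IL_{fiber\text{-}to\text{-}fiber}}{2} = \frac{7.5\ \mathrm{dB}}{2} = 3.75\ \mathrm{dB},

    i.e. a coupling efficiency of -3.75 dB per coupler.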

  13. Asymmetric plasmonic-dielectric coupler with short coupling length, high extinction ratio, and low insertion loss.

    PubMed

    Li, Qiang; Song, Yi; Zhou, Gan; Su, Yikai; Qiu, Min

    2010-10-01

    Asymmetric directional coupling between a hybrid plasmonic waveguide with subwavelength field confinement and a conventional dielectric waveguide is investigated. The proposed hybrid coupler features short coupling length, high coupling efficiency, high extinction ratio, and low insertion loss; it can also be integrated into a silicon-based platform. This coupler can be potentially adopted for signal routing between plasmonic waveguides and dielectric waveguides in photonic integrated circuits. Furthermore, it can be exploited to efficiently excite hybrid plasmonic modes with conventional dielectric modes.

  14. Wet-chemical fabrication of a single leakage-channel grating coupler

    NASA Astrophysics Data System (ADS)

    Weisenbach, Lori; Zelinski, Brian J. J.; Roncone, Ronald L.; Burke, James J.

    1995-04-01

    We demonstrate the fabrication of a unique optical device, the single leakage-channel grating coupler, using sol-gel techniques. Design specifications are outlined to establish the material criteria for the sol-gel compositions. Material choice and preparation are described. We evaluate the characteristics and performance of the single leakage-channel grating coupler by comparing the predicted and the measured branching ratios. The branching ratio of the solution-derived device is within 3% of the theoretically predicted value.

  15. A Novel Multimode Waveguide Coupler for Accurate Power Measurement of Traveling Wave Tube Harmonic Frequencies

    NASA Technical Reports Server (NTRS)

    Wintucky, Edwin G.; Simons, Rainee N.

    2014-01-01

    This paper presents the design, fabrication and test results for a novel waveguide multimode directional coupler (MDC). The coupler fabricated from two dissimilar waveguides is capable of isolating the power at the second harmonic frequency from the fundamental power at the output port of a traveling-wave tube (TWT). In addition to accurate power measurements at harmonic frequencies, a potential application of the MDC is in the design of a beacon source for atmospheric propagation studies at millimeter-wave frequencies.

  16. Direct observation of Landau-Zener tunneling in a curved optical waveguide coupler

    SciTech Connect

    Dreisow, F.; Szameit, A.; Heinrich, M.; Nolte, S.; Tuennermann, A.; Ornigotti, M.; Longhi, S.

    2009-05-15

    An electromagnetic realization of Landau-Zener (LZ) tunneling is experimentally demonstrated in femtosecond-laser-written waveguide couplers with a cubically bent axis. Quantitative measurements of light evolution inside the coupler, based on fluorescence imaging, make it possible to trace the detailed dynamics of the LZ process. The experimental results are in good agreement with the theoretical LZ model for linear crossing of energy levels with constant coupling of finite duration.
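
    For reference (a textbook result, not a value taken from this experiment), the Landau-Zener model with coupling strength \kappa and a detuning between the two levels swept linearly at rate \alpha predicts a crossing (tunneling) probability

      P_{LZ} = \exp\!\left(-\,\frac{2\pi\kappa^{2}}{|\alpha|}\right),

    which in the waveguide analogue is set by the coupler's coupling constant and the rate at which the propagation-constant mismatch is swept along the bent axis.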

  17. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    SciTech Connect

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrating a Space-Time GIS data model on a high-performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high-performance computing platforms.

  18. A high-performance MPI implementation on a shared-memory vector supercomputer.

    SciTech Connect

    Gropp, W.; Lusk, E.; Mathematics and Computer Science

    1997-01-01

    In this article we recount the sequence of steps by which MPICH, a high-performance, portable implementation of the Message-Passing Interface (MPI) standard, was ported to the NEC SX-4, a high-performance parallel supercomputer. Each step in the sequence raised issues that are important for shared-memory programming in general and shed light on both MPICH and the SX-4. The result is a low-latency, very high bandwidth implementation of MPI for the NEC SX-4. In the process, MPICH was also improved in several general ways.

  19. High-performance laboratories and cleanrooms

    SciTech Connect

    Tschudi, William; Sartor, Dale; Mills, Evan; Xu, Tengfang

    2002-07-01

    The California Energy Commission sponsored this roadmap to guide energy efficiency research and deployment for high performance cleanrooms and laboratories. Industries and institutions utilizing these building types (termed high-tech buildings) have played an important part in the vitality of the California economy. This roadmap's key objective is to present a multi-year agenda to prioritize and coordinate research efforts. It also addresses delivery mechanisms to get the research products into the market. Because of the importance to the California economy, it is appropriate and important for California to take the lead in assessing the energy efficiency research needs, opportunities, and priorities for this market. In addition to the importance to California's economy, energy demand for this market segment is large and growing (estimated at 9400 GWh for 1996, Mills et al. 1996). With their 24-hour continuous operation, high-tech facilities are a major contributor to the peak electrical demand. Laboratories and cleanrooms constitute the high-tech building market, and although each building type has its unique features, they are similar in that they are extremely energy intensive, involve special environmental considerations, have very high ventilation requirements, and are subject to regulations--primarily safety driven--that tend to have adverse energy implications. High-tech buildings have largely been overlooked in past energy efficiency research. Many industries and institutions utilize laboratories and cleanrooms. As illustrated, there are many industries operating cleanrooms in California. These include semiconductor manufacturing, semiconductor suppliers, pharmaceutical, biotechnology, disk drive manufacturing, flat panel displays, automotive, aerospace, food, hospitals, medical devices, universities, and federal research facilities.

  20. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.

  1. High-Performance Monopropellants and Catalysts Evaluated

    NASA Technical Reports Server (NTRS)

    Reed, Brian D.

    2004-01-01

    The NASA Glenn Research Center is sponsoring efforts to develop advanced monopropellant technology. The focus has been on monopropellant formulations composed of an aqueous solution of hydroxylammonium nitrate (HAN) and a fuel component. HAN-based monopropellants do not have a toxic vapor and do not need the extraordinary procedures for storage, handling, and disposal required of hydrazine (N2H4). Generically, HAN-based monopropellants are denser and have lower freezing points than N2H4. The performance of HAN-based monopropellants depends on the selection of fuel, the HAN-to-fuel ratio, and the amount of water in the formulation. HAN-based monopropellants are not seen as a replacement for N2H4 per se, but rather as a propulsion option in their own right. For example, HAN-based monopropellants would prove beneficial for the orbit insertion of small, power-limited satellites because of this propellant's high performance (reduced system mass), high density (reduced system volume), and low freezing point (elimination of tank and line heaters). Under a Glenn-contracted effort, Aerojet Redmond Rocket Center conducted testing to provide the foundation for the development of monopropellant thrusters with a specific impulse (Isp) goal of 250 sec. A modular, workhorse reactor (representative of a 1-lbf thruster) was used to evaluate HAN formulations with catalyst materials. Stoichiometric, oxygen-rich, and fuel-rich formulations of HAN-methanol and HAN-tris(aminoethyl)amine trinitrate were tested to investigate the effects of stoichiometry on combustion behavior. Aerojet found that fuel-rich formulations degrade the catalyst and reactor faster than oxygen-rich and stoichiometric formulations do. A HAN-methanol formulation with a theoretical Isp of 269 sec (designated HAN269MEO) was selected as the baseline. With a combustion efficiency of at least 93 percent demonstrated for HAN-based monopropellants, HAN269MEO will meet the 250-sec Isp goal.

  2. Experience with high-performance PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Goldburgh, Mitchell M.; Head, Calvin

    1997-05-01

    Lockheed Martin (Loral) has installed PACS with associated teleradiology in several tens of hospitals. The PACS that have been installed have been the basis for a shift to filmless radiology in many of the hospitals. The basic structure of the PACS and the teleradiology being used is outlined. The way that the PACS are being used in the hospitals is instructive. The three most used areas for radiology in the hospital are the wards (including the ICU wards), the emergency room, and the orthopedics clinic. The examinations are mostly CR images, with 20 to 30 percent of the examinations being CT, MR, and ultrasound exams. The PACS are being used to realize improved productivity for radiology and for the clinicians. For radiology, the same staff is handling a 30 to 50 percent larger workload. For the clinicians, 10 to 20 percent of their time is being saved in dealing with radiology images. The improved productivity stems from the high performance of the PACS that has been designed and installed. Images are available on any workstation in the hospital within less than two seconds, even during the busiest hour of the day. The examination management function restricts the attention of any one user to the examinations that are of interest. The examination management organizes the workflow through the radiology department and the hospital, improving the service of the radiology department by reducing the time until the information from a radiology examination is available. The remaining weak link in the PACS system is transcription. The examination can be acquired, read, and the report dictated in much less than ten minutes. The transcription of the dictated reports can take from a few hours to a few days. The addition of automatic transcription services will remove this weak link.

  3. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report about our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and second occulter for speckle light suppression. The testbed consists of a coronagraph with high precision optics (2 inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  4. Design of high performance piezo composites actuators

    NASA Astrophysics Data System (ADS)

    Almajid, Abdulhakim A.

    Designs of high-performance piezo composite actuators are developed. Functionally Graded Microstructure (FGM) piezoelectric actuators are designed to reduce the stress concentration at the middle interface present in standard bimorph actuators while maintaining high actuation performance. The FGM piezoelectric laminates are composite materials with electroelastic properties that vary through the laminate thickness. The elastic behavior of piezo-laminate actuators is developed using a 2D-elasticity model and a modified classical lamination theory (CLT). The stresses and out-of-plane displacements are obtained for standard and FGM piezoelectric bimorph plates under cylindrical bending generated by an electric field applied through the thickness of the laminate. The analytical model is developed for two different actuator geometries, a rectangular plate actuator and a disk-shaped actuator. The limitations of CLT are investigated against the 2D-elasticity model for the rectangular plate geometry. The analytical models based on CLT (rectangular and circular) and 2D elasticity are compared with a model based on the Finite Element Method (FEM). The experimental study consists of two FGM actuator systems, the PZT/PZT FGM system and the porous FGM system. The electroelastic properties of each layer in the FGM systems were measured and input into the analytical models to predict the FGM actuator performance. The performance of the FGM actuator is optimized by manipulating the thickness of each layer in the FGM system. The thickness of each layer in the FGM system is made to vary in a linear or non-linear manner to achieve the best performance of the FGM piezoelectric actuator. The analytical and FEM results are found to agree well with the experimental measurements for both rectangular and disk actuators. The CLT solutions coincide well with the elasticity solutions for high aspect ratios, while they give poor results compared to the 2D-elasticity solutions for low aspect ratios.

  5. Design Procedure and Fabrication of Reproducible Silicon Vernier Devices for High-Performance Refractive Index Sensing

    PubMed Central

    Troia, Benedetto; Khokhar, Ali Z.; Nedeljkovic, Milos; Reynolds, Scott A.; Hu, Youfang; Mashanovich, Goran Z.; Passaro, Vittorio M. N.

    2015-01-01

    In this paper, we propose a generalized procedure for the design of integrated Vernier devices for high performance chemical and biochemical sensing. In particular, we demonstrate the accurate control of the most critical design and fabrication parameters of silicon-on-insulator cascade-coupled racetrack resonators operating in the second regime of the Vernier effect, around 1.55 μm. The experimental implementation of our design strategies has allowed a rigorous and reliable investigation of the influence of racetrack resonator and directional coupler dimensions as well as of waveguide process variability on the operation of Vernier devices. Figures of merit of our Vernier architectures have been measured experimentally, evidencing a high reproducibility and a very good agreement with the theoretical predictions, as also confirmed by relative errors even lower than 1%. Finally, a Vernier gain as high as 30.3, average insertion loss of 2.1 dB and extinction ratio up to 30 dB have been achieved. PMID:26067193
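
    As a reminder of the terminology (standard first-order Vernier relations, quoted only for context and not specific to these devices), cascading two resonators with slightly different free spectral ranges FSR_1 and FSR_2 extends the overall free spectral range and amplifies small spectral shifts by the Vernier gain

      FSR_{tot} = \frac{FSR_1\, FSR_2}{|FSR_1 - FSR_2|}, \qquad G = \frac{FSR_2}{|FSR_1 - FSR_2|},

    so a gain of about 30 implies that the two racetrack free spectral ranges differ by only a few percent.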

  6. Modeling and validation of high-performance and athermal AWGs for the silicon photonics platform

    NASA Astrophysics Data System (ADS)

    Tondini, Stefano; Castellan, Claudio; Mancinelli, Mattia; Pavesi, Lorenzo

    2016-05-01

    Arrayed waveguide gratings (AWGs) are a key component in WDM systems, allowing for de-multiplexing and routing of wavelength channels. A high-resolution AWG that satisfies challenging requirements in terms of insertion loss and crosstalk is needed to support the paradigm change in the deployment of optical communication that is now occurring within ROADM architectures. In order to improve the performance and keep down the footprint, we modified the design at the star coupler (SC) and at the bending stages. We evaluated how the background noise is modified within a whisker-shaped SC optimized to reduce the reflectivity of the SOI slab and suppress the back-scattered optical signal. A dedicated heating circuit has also been designed in order to allow for an overall tuning of the channel output. A high-performance AWG must also cope with possible thermally induced environmental changes, especially in the case of integration within a Photonic Integrated Circuit (PIC). Therefore, we suggested a way to reduce the thermal sensitivity.

  7. Towards a fully packaged high-performance RF sensor featuring slotted photonic crystal waveguides

    NASA Astrophysics Data System (ADS)

    Chung, Chi-Jui; Subbaraman, Harish; Zhang, Xingyu; Yan, Hai; Luo, Jingdong; Jen, Alex K.-Y.; Nelson, Robert L.; Lee, Charles Y.-C.; Chen, Ray T.

    2016-02-01

    A low-loss and high-sensitivity X-band RF sensor based on electro-optic (EO) polymer-filled silicon slot photonic crystal waveguides (PCW) and a bowtie antenna is proposed. By taking advantage of the slow-light enhancement in the PCW (>20X), the large EO coefficient of the EO polymer (r33 > 200 pm/V), as well as the significant electric field enhancement of the bowtie antenna on a silicon dioxide substrate (>10000X), we can realize a large in-device EO coefficient of over 1000 pm/V and thus a high-performance RF wave sensor. In addition, an on-chip Mach-Zehnder interferometer (MZI) layout working under a push-pull configuration is adopted to further increase the sensitivity of the sensor. Furthermore, inverse taper couplers and slotted photonic crystal waveguides are carefully designed and discussed in this paper to reduce the insertion loss of the device and so increase the device signal-to-noise ratio. The minimum detectable electromagnetic power density is pushed down to 2.05 mW/m2, corresponding to a minimum sensing electric field of 0.61 V/m. This photonic RF sensor has several important advantages over conventional electronic RF sensors based on an electrical scheme, including high data throughput, compact size, and great immunity to electromagnetic interference (EMI).

  8. Analysis of a single ring resonator with 2×2 90-degree multimode waveguide turning couplers

    NASA Astrophysics Data System (ADS)

    Chiu, C. L.; Liao, Yen-Hsun

    2016-02-01

    A novel design of a single ring resonator with two low-loss 2×2 90-degree multimode waveguide turning mirror couplers based on an InP structure is presented. The coupling factor of the 2×2 90-degree multimode waveguide turning mirror coupler is inverted from K=0.85 to K=0.15 when one folding is applied. The 2×2 90-degree turning mirror coupler for K=0.15 is (3/4)Lπ in length, one third the length of the conventional straight 2×2 multimode waveguide interference coupler, which is (9/4)Lπ long for K=0.15. The cavity length of the curved waveguide (90-degree arc length) in this ring resonator with two 2×2 90-degree multimode waveguide turning couplers is half of that with two 2×2 MMI couplers (180-degree arc length), so the free spectral range (FSR) is doubled. The output spectral response shows an FSR of 82 GHz for the device, with a contrast of 4 dB and a FWHM of 0.24 nm for the drop port. The results of the numerical analysis calculated from the transfer functions of a single ring resonator are in agreement with the experimental results.
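
    The FSR scaling follows from the standard ring-resonator relation (group index n_g, round-trip length L):

      FSR = \frac{c}{n_g L},

    so shortening the round-trip length by using 90-degree instead of 180-degree arcs increases the free spectral range proportionally, consistent with the reported doubling.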

  9. An optical coupler of natural light guiding system based on stepped structure

    NASA Astrophysics Data System (ADS)

    Pan, Po-Hsuan; Chen, Yi-Yung; Whang, Allen Jong-Woei

    2009-08-01

    For energy saving and healthy lighting, much research has focused on sunlight illumination systems. A Natural Light Guiding System can be separated into collecting, transmitting, and lighting parts. With a cascadable concentrator in the collecting part, the transmitting part will use a large number of fibers, which means most of the cost is in the transmitting part. With an N-to-1 coupler, the number of light pipes can be reduced quickly. In general, the optical coupler has a tapered structure. According to the etendue principle, however, the product of beam angle and area is conserved, so the beam angle of the coupled sunlight will increase, making it difficult to couple again or to transmit over a long distance. The total energy of the exit beam from the N-to-1 coupler should be larger than the energy of one incident beam. In this paper, we use a stepped structure to design an optical coupler for N-to-1 coupling. In this research, the Natural Light Guiding System with the optical coupler is simulated, and we evaluate the parameters of the stepped structure. Finally, we analyze the coupling efficiency of the coupler.

  10. Design of coupler for the NSLS-II storage ring superconducting RF cavity

    SciTech Connect

    Yeddulla, M.; Rose, J.

    2011-03-28

    NSLS-II is a 3 GeV, 500 mA, high-brightness, 1 MW beam power synchrotron facility that is designed with four superconducting cavities working at 499.68 MHz. To operate the cavities in an over-damped coupling condition, an external quality factor (Qext) of ~65,000 is required. We have modified the existing coupler for the CESR-B cavity, which has a Qext of ~200,000, to meet the requirements of NSLS-II. The CESR-B cavity has an aperture coupler with a coupler 'tongue' connecting the cavity to the waveguide. We have optimized the length, width, and thickness of the 'tongue' as well as the width of the aperture to increase the coupling, using the three-dimensional electromagnetic field solver HFSS. Several possible designs will be presented. We have modified the coupler of the CESR-B cavity for use in the storage ring of the NSLS-II project using HFSS and verified the design using CST Microwave Studio. Using a combination of increasing the length and width of the coupler tongue and increasing the width of the aperture, the external Q of the cavity coupler was decreased to ~65,000, as required for the NSLS-II storage ring design.
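
    For context (a standard cavity relation, not an additional result of this work), the external quality factor enters the loaded Q as

      \frac{1}{Q_L} = \frac{1}{Q_0} + \frac{1}{Q_{ext}},

    so lowering Q_ext from roughly 200,000 to roughly 65,000 strengthens the coupling between the waveguide and the cavity, which is what the over-damped coupling condition quoted above requires.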

  11. Wakefield and RF Kicks Due to Coupler Asymmetry in TESLA-Type Accelerating Cavities

    SciTech Connect

    Bane, K.L.F.; Adolphsen, C.; Li, Z.; Dohlus, M.; Zagorodnov, I.; Gonin, I.; Lunin, A.; Solyak, N.; Yakovlev, V.; Gjonaj, E.; Weiland, T.; /Darmstadt, Tech. Hochsch.

    2008-07-07

    In a future linear collider, such as the International Linear Collider (ILC), trains of high current, low emittance bunches will be accelerated in a linac before colliding at the interaction point. Asymmetries in the accelerating cavities of the linac will generate fields that will kick the beam transversely and degrade the beam emittance and thus the collider performance. In the main linac of the ILC, which is filled with TESLA-type superconducting cavities, it is the fundamental (FM) and higher mode (HM) couplers that are asymmetric and thus the source of such kicks. The kicks are of two types: one, due to (the asymmetry in) the fundamental RF fields and the other, due to transverse wakefields that are generated by the beam even when it is on axis. In this report we calculate the strength of these kicks and estimate their effect on the ILC beam. The TESLA cavity comprises nine cells, one HM coupler in the upstream end, and one (identical, though rotated) HM coupler and one FM coupler in the downstream end (for their shapes and location see Figs. 1, 2) [1]. The cavity is 1.1 m long, the iris radius 35 mm, and the coupler beam pipe radius 39 mm. Note that the couplers reach closer to the axis than the irises, down to a distance of 30 mm.

  12. Asymmetric hollow POF coupler design for portable optical access card system

    NASA Astrophysics Data System (ADS)

    Ehsan, Abang Annuar; Shaari, Sahbudin; Abd Rahman, Mohd Kamil

    2009-05-01

    An optical code generating device using plastic optical fiber (POF) coupler for portable optical access card system is presented. The code generating device constructed using asymmetric hollow POF coupler design provides a unique series of output light intensities which are successively used as an optical code. Each coupler will be assigned with a unique optical code based on the asymmetrical waveguide design. Non-sequential ray tracing simulation of various coupler designs showed a linear relationship between the tap-off ratio (TOFR) and the waveguide tap width. The results for the simulated and fabricated 1x2 asymmetric couplers show the same linear characteristics between the TOFR and the tap width. The simulated devices show a TOFR variation from 18.6% to 49.9% whereas the TOFR for the fabricated metal-based devices varies from 10.7% up to 47.7%, for a tap width of 500 μm to 1 mm. The insertion loss for the 1x2 asymmetric coupler at the tap line varies from 12.7 dB to 21.2 dB whereas for the bus line, the average insertion loss is about 12 dB.

  13. Comparative study on compact planar waveguide based photonic integrated couplers using simple effective index method

    NASA Astrophysics Data System (ADS)

    Deka, Bidyut; Dutta, Aradhana; Sahu, Partha P.

    2013-11-01

    The miniaturization of photonic components in integrated optic waveguide devices to the microscale platform has attracted enormous attention from researchers and entrepreneurs. In this paper, we present a comparative study of photonic integrated planar waveguide based couplers using a mathematical model based on the sinusoidal-mode simple effective index method (SEIM). The basic photonic integrated components, namely the directional coupler (DC), the two-mode interference (TMI) coupler and the multimode interference (MMI) coupler, have been designed and fabricated using the versatile SiON waveguide technology (SiON as the waveguide core material in a silica waveguide). The experimental results have been compared with the SEIM-based theoretical results and further verified with a commercially available software tool based on the beam propagation method (BPM). With a focus on device compactness, particular emphasis is placed on device geometry. In this direction, the theoretical and experimental results obtained have been compared with tooth-shaped grating-assisted geometries for these photonic components. It is found that the grating-assisted structures have a beat length roughly half that of the conventional geometry. Further, it is seen that the beat length of the TMI coupler is smaller than those of the DC and the MMI coupler.

  14. High performance fuel element with end seal

    DOEpatents

    Lee, Gary E.; Zogg, Gordon J.

    1987-01-01

    A nuclear fuel element comprising an elongate block of refractory material having a generally regular polygonal cross section. The block includes parallel, spaced, first and second end surfaces. The first end surface has a peripheral sealing flange formed thereon while the second end surface has a peripheral sealing recess sized to receive the flange. A plurality of longitudinal first coolant passages are positioned inwardly of the flange and recess. Elongate fuel holes are separate from the coolant passages and disposed inwardly of the flange and the recess. The block is further provided with a plurality of peripheral second coolant passages in general alignment with the flange and the recess for flowing coolant. The block also includes two bypasses for each second passage. One bypass intersects the second passage adjacent to but spaced from the first end surface and intersects a first passage, while the other bypass intersects the second passage adjacent to but spaced from the second end surface and intersects a first passage so that coolant flowing through the second passages enters and exits the block through the associated first passages.

  15. High performance amorphous selenium lateral photodetector

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Shiva; Allec, Nicholas; Karim, Karim S.

    2012-03-01

    Lateral amorphous selenium (a-Se) detectors based on the metal-semiconductor-metal (MSM) device structure have been studied for indirect detector medical imaging applications. These detectors have raised interest due to their simple structure, ease of fabrication, high-speed, low dark current, low capacitance per unit area and better light utilization. The lateral device structure has a benefit that the electrode spacing may be easily controlled to reduce the required bias for a given desired electric field. In indirect conversion x-ray imaging, the scintillator is coupled to the top of the a-Se MSM photodetector, which itself is integrated on top of the thin-film-transistor (TFT) array. The carriers generated at the top surface of the a-Se layer experience a field that is parallel to the surface, and does not initially sweep them away from the surface. Therefore these carriers may recombine or get trapped in surface states and change the field at the surface, which may degrade the performance of the photodetector. In addition, due to the finite width of the electrodes, the fill factor of the device is less than unity. In this study we examine the effect of lateral drift of carriers and the fill factor on the photodetector performance. The impact of field magnitude on the performance is also investigated.

  16. High Performance Diesel Fueled Cabin Heater

    SciTech Connect

    Butcher, Tom

    2001-08-05

    Recent DOE-OHVT studies show that diesel emissions and fuel consumption can be greatly reduced at truck stops by switching from engine idle to auxiliary-fired heaters. Brookhaven National Laboratory (BNL) has studied high performance diesel burner designs that address the shortcomings of current low fire-rate burners. Initial test results suggest a real opportunity for the development of a truly advanced truck heating system. The BNL approach is to use a low-pressure, air-atomized burner derived from burner designs used commonly in gas turbine combustors. This paper reviews the design and test results of the BNL diesel-fueled cabin heater. The burner design is covered by U.S. Patent 6,102,687, issued to the U.S. DOE on August 15, 2000. The development of several novel oil burner applications based on low-pressure air atomization is described. The atomizer used is a pre-filming, air-blast nozzle of the type commonly used in gas turbine combustion. The air pressure used can be as low as 1300 Pa, and such pressure can be easily achieved with a fan. Advantages over conventional, pressure-atomized nozzles include the ability to operate at low input rates without very small passages and much lower fuel pressure requirements. At very low firing rates, the small passage sizes in pressure swirl nozzles lead to poor reliability, and this factor has practically constrained these burners to firing rates over 14 kW. Air atomization can be used very effectively at low firing rates to overcome this concern. However, many air atomizer designs require pressures that can be achieved only with a compressor, greatly complicating the burner package and increasing cost. The work described in this paper has been aimed at the practical adaptation of low-pressure air atomization to low-input oil burners. The objective of this work is the development of burners that can achieve the benefits of air atomization with air pressures practically achievable with a simple burner fan.

  17. Integrating advanced facades into high performance buildings

    SciTech Connect

    Selkowitz, Stephen E.

    2001-05-01

    Glass is a remarkable material, but its functionality is significantly enhanced when it is processed or altered to provide added intrinsic capabilities. The overall performance of glass elements in a building can be further enhanced when they are designed to be part of a complete facade system. Finally, the facade system delivers the greatest performance to the building owner and occupants when it becomes an essential element of a fully integrated building design. This presentation examines the growing interest in incorporating advanced glazing elements into more comprehensive facade and building systems in a manner that increases comfort, productivity and amenity for occupants, reduces operating costs for building owners, and contributes to improving the health of the planet by reducing overall energy use and negative environmental impacts. We explore the role of glazing systems in dynamic and responsive facades that provide the following functionality: enhanced sun protection and cooling load control while improving thermal comfort and providing most of the light needed with daylighting; enhanced air quality and reduced cooling loads using natural ventilation schemes employing the facade as an active air control element; reduced operating costs by minimizing lighting, cooling and heating energy use by optimizing the daylighting-thermal tradeoffs; net positive contributions to the energy balance of the building using integrated photovoltaic systems; and improved indoor environments leading to enhanced occupant health, comfort and performance. In addressing these issues, facade system solutions must, of course, respect the constraints of latitude, location, solar orientation, acoustics, earthquake and fire safety, etc. Since climate and occupant needs are dynamic variables, in a high performance building the facade solution must have the capacity to respond and adapt to these variable exterior conditions and to changing occupant needs. This responsive performance capability

  18. High Efficiency, High Performance Clothes Dryer

    SciTech Connect

    Peter Pescatore; Phil Carbone

    2005-03-31

    This program covered the development of two separate products; an electric heat pump clothes dryer and a modulating gas dryer. These development efforts were independent of one another and are presented in this report in two separate volumes. Volume 1 details the Heat Pump Dryer Development while Volume 2 details the Modulating Gas Dryer Development. In both product development efforts, the intent was to develop high efficiency, high performance designs that would be attractive to US consumers. Working with Whirlpool Corporation as our commercial partner, TIAX applied this approach of satisfying consumer needs throughout the Product Development Process for both dryer designs. Heat pump clothes dryers have been in existence for years, especially in Europe, but have not been able to penetrate the market. This has been especially true in the US market where no volume production heat pump dryers are available. The issue has typically been around two key areas: cost and performance. Cost is a given in that a heat pump clothes dryer has numerous additional components associated with it. While heat pump dryers have been able to achieve significant energy savings compared to standard electric resistance dryers (over 50% in some cases), designs to date have been hampered by excessively long dry times, a major market driver in the US. The development work done on the heat pump dryer over the course of this program led to a demonstration dryer that delivered the following performance characteristics: (1) 40-50% energy savings on large loads with 35 F lower fabric temperatures and similar dry times; (2) 10-30 F reduction in fabric temperature for delicate loads with up to 50% energy savings and 30-40% time savings; (3) Improved fabric temperature uniformity; and (4) Robust performance across a range of vent restrictions. For the gas dryer development, the concept developed was one of modulating the gas flow to the dryer throughout the dry cycle. Through heat modulation in a

  19. High Performance Commercial Fenestration Framing Systems

    SciTech Connect

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherent good structural properties and long service life, which is required of commercial and architectural frames. At the same time, they are lightweight and durable, requiring very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainable features. However, from an energy efficiency point of view, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems to improve the energy performance of commercial fenestration systems and in turn reduce the energy consumption of commercial buildings and achieve a zero energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  20. High performance computing environment for multidimensional image analysis

    PubMed Central

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-01-01

    Background: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. Results: We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478× speedup. Conclusion: Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets. PMID:17634099
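
    The quoted speedup is simply the ratio of the two runtimes reported above:

      \frac{2.5\ \mathrm{h} \times 3600\ \mathrm{s/h}}{18.8\ \mathrm{s}} = \frac{9000}{18.8} \approx 478.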

  1. Optimizing high performance computing workflow for protein functional annotation.

    PubMed

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data. PMID:25313296
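
    A minimal sketch of the embarrassingly parallel part of such a workflow is shown below (illustrative only; the classify() stub, the worker count, and the use of Python's ProcessPoolExecutor are assumptions standing in for the PSI-BLAST-based cluster assignment and the supercomputer job layout described above).

      from concurrent.futures import ProcessPoolExecutor

      def classify(record):
          # placeholder for the alignment-based assignment of one protein
          # to a cluster of orthologous groups (COG)
          name, seq = record
          return name, "COG%04d" % (len(seq) % 10000)

      def annotate(records, workers=8):
          # records: iterable of (name, sequence) pairs, e.g. parsed from FASTA
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return dict(pool.map(classify, records, chunksize=64))

      if __name__ == "__main__":
          demo = [("p%d" % i, "MKT" * (i + 1)) for i in range(1000)]
          print(len(annotate(demo)))     # 1000 classified records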

  2. High performance simulation of environmental tracers in heterogeneous domains.

    PubMed

    Gardner, William P; Hammond, Glenn; Lichtner, Peter

    2015-04-01

    In this study, we use PFLOTRAN, a highly scalable, parallel flow and reactive transport code, to simulate the concentrations of 3H, 3He, CFC-11, CFC-12, CFC-113, SF6, 39Ar, and the mean groundwater age in heterogeneous fields on grids with more than 10 million nodes. We utilize this computational platform to simulate the concentration of multiple tracers in high-resolution, heterogeneous 2D and 3D domains, and calculate tracer-derived ages. Tracer-derived ages show systematic biases toward younger ages when the groundwater age distribution contains water older than the maximum tracer age. The deviation of the tracer-derived age distribution from the true groundwater age distribution increases with increasing heterogeneity of the system. However, the effect of heterogeneity is diminished as the mean travel time gets closer to the tracer age limit. Age distributions in 3D domains differ significantly from those in 2D domains. 3D simulations show decreased mean age and less variance in the age distribution for identical heterogeneity statistics. High-performance computing allows for investigation of tracer and groundwater age systematics in high-resolution domains, providing a platform for understanding and utilizing environmental tracer and groundwater age information in heterogeneous 3D systems. PMID:24372403

  3. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  4. The design of linear algebra libraries for high performance computers

    SciTech Connect

    Dongarra, J.J. |; Walker, D.W.

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
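
    The block-cyclic layout mentioned above can be summarized in a few lines; the sketch below (hypothetical helper name and grid sizes, not library code) maps a global matrix entry to the coordinates of the owning process in a 2D process grid.

      def owner(i, j, mb, nb, p_rows, p_cols):
          """Process (row, col) owning global entry (i, j) under a 2D
          block-cyclic distribution with mb x nb blocks."""
          return (i // mb) % p_rows, (j // nb) % p_cols

      # Example: a 2 x 3 process grid with 64 x 64 blocks
      print(owner(130, 200, 64, 64, 2, 3))   # -> (0, 0)

    Cycling whole blocks over the process grid keeps the work balanced as a factorization proceeds while preserving the block locality that the distributed Level 3 BLAS building blocks rely on.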

  5. Transforming Power Grid Operations via High Performance Computing

    SciTech Connect

    Huang, Zhenyu; Nieplocha, Jarek

    2008-07-31

    Past power grid blackout events revealed the inadequacy of grid operations in responding to adverse situations, partially due to low computational efficiency in grid operation functions. High performance computing (HPC) provides a promising solution to this problem. HPC applications in power grid computation are also becoming necessary to take advantage of parallel computing platforms, as the computer industry undergoes a significant change from the traditional single-processor environment to an era of multi-processor computing platforms. HPC applications to power grid operations are multi-fold. HPC can improve today's grid operation functions, such as state estimation and contingency analysis, and reduce the solution time from minutes to seconds, comparable to SCADA measurement cycles. HPC also enables the integration of dynamic analysis into real-time grid operations. Dynamic state estimation, look-ahead dynamic simulation, and real-time dynamic contingency analysis can be implemented and would be three key dynamic functions in future control centers. HPC applications call for better decision support tools, which also need HPC support to handle large volumes of data and large numbers of cases. Given the complexity of the grid and the sheer number of possible configurations, HPC is considered to be an indispensable element in next-generation control centers.

  8. Multimethod communication for high-performance metacomputing applications

    SciTech Connect

    Foster, I.; Geisler, J.; Tuecke, S.; Kesselman, C.

    1996-12-31

    Metacomputing systems use high-speed networks to connect supercomputers, mass storage systems, scientific instruments, and display devices with the objective of enabling parallel applications to access geographically distributed computing resources. However, experience shows that high performance often can be achieved only if applications can integrate diverse communication substrates, transport mechanisms, and protocols, chosen according to where communication is directed, what is communicated, or when communication is performed. In this article, we describe a software architecture that addresses this requirement. This architecture allows multiple communication methods to be supported transparently in a single application, with either automatic or user-specified selection criteria guiding the methods used for each communication. We describe an implementation of this architecture, based on the Nexus communication library, and use this implementation to evaluate performance issues. The implementation supported a wide variety of applications in the I-WAY metacomputing experiment at Supercomputing 95; we use one of these applications to provide a quantitative demonstration of the advantages of multimethod communication in a heterogeneous networked environment.

  9. Scout: high-performance heterogeneous computing made simple

    SciTech Connect

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  10. High Performance GPU-Based Fourier Volume Rendering

    PubMed Central

    Abdellah, Marwan; Eldeib, Ayman; Sharawi, Amr

    2015-01-01

    Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) has become an attractive, capable platform that can deliver enormous raw computational power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures. PMID:25866499
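
    The projection-slice theorem that FVR relies on is easy to verify numerically (a toy 2D sketch in Python/NumPy, not the GPU pipeline of the paper): summing an image along one axis equals the inverse FFT of the central slice of its 2D FFT.

      import numpy as np

      img = np.random.default_rng(0).random((64, 64))
      proj_spatial = img.sum(axis=0)                    # direct projection along axis 0
      central_slice = np.fft.fft2(img)[0, :]            # zero-frequency row of the 2D spectrum
      proj_fourier = np.fft.ifft(central_slice).real    # inverse 1D FFT of that slice
      print(np.allclose(proj_spatial, proj_fourier))    # True

    In 3D, the same identity lets each view be produced by extracting one 2D slice from the precomputed 3D spectrum and taking its inverse 2D FFT, which is the source of the O(N² log N) per-projection cost quoted above.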

  11. The Nanoelectric Modeling Tool (NEMO) and Its Expansion to High Performance Parallel Computing

    NASA Technical Reports Server (NTRS)

    Klimeck, G.; Bowen, C.; Boykin, T.; Oyafuso, F.; Salazar-Lazaro, C.; Stoica, A.; Cwik, T.

    1998-01-01

    Material variations on an atomic scale enable the quantum mechanical functionality of devices such as resonant tunneling diodes (RTDs), quantum well infrared photodetectors (QWIPs), quantum well lasers, and heterostructure field effect transistors (HFETs).

  12. Performance optimization of RoF systems using 120° hybrid coupler for OSSB signal against third order intermodulation

    NASA Astrophysics Data System (ADS)

    Kumar, Parvin; Sharma, Sanjay Kumar; Singla, Shelly

    2016-10-01

    The performance of a radio over fiber (RoF) system with a dual-drive Mach-Zehnder modulator has been optimized against third order intermodulation distortion by using a 120° hybrid coupler in the transmission system. The signal-to-noise-and-distortion ratio (SNDR) has been evaluated, and a performance comparison is drawn between systems based on 90° and 120° hybrid couplers in both noise-dominant and intermodulation-distortion-dominant environments. The SNDR is efficiently improved by employing the 120° hybrid coupler in both environments. An improvement of 4.86 dB in the maximum SNDR is obtained at 20 km of optical fiber with the 120° hybrid coupler compared with the 90° hybrid coupler based system. A significant reduction of third order intermodulation power at the receiver has also been observed with the 120° hybrid coupler.

  13. Thermal and Structural Analysis of Co-axial Coupler used in High Power Helix Traveling-Wave Tube

    NASA Astrophysics Data System (ADS)

    Gahlaut, Vishant; Alvi, Parvez Ahmad; Ghosh, Sanjay Kumar

    2014-07-01

    In traveling-wave tubes (TWTs), coaxial couplers or window assemblies are used for coupling milliwatts to hundreds of watts of average power. For proper transformation of the impedance of the interaction structure to the standard connector, coaxial couplers are suitably modeled as a multi-section coaxial transformer. Due to high average power propagation, the center conductor of the coupler heats up and the dimensions of the multi-section coupler deform from their cold-condition values, which causes impedance mismatch and an increased thermal load. Due to the impedance mismatch, the RF signal is reflected from the coupler, causing oscillation and finally destruction of the TWT. This paper presents the thermal and structural analysis of a coaxial coupler to quantify the temperature distribution in different regions, the deformation of the cold dimensions, and the stress arising from the material properties of the window disc.

  14. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
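
    A quick use of this estimate (a sketch in Python; the radii are assumed example values, not taken from the paper):

      R, r = 0.5, 0.05                        # large-pipe and small-pipe radii (m), example values
      for label, alpha in (("laminar", 4.0), ("turbulent", 19.0 / 7.0)):
          print(f"{label}: about {round((R / r) ** alpha)} small pipes per large pipe")

    For R/r = 10 this gives roughly 10,000 small pipes in the laminar case and about 520 in the turbulent case.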

  15. Toward a common component architecture for high-performance scientific computing

    SciTech Connect

    Armstrong, R; Gannon, D; Geist, A; Keahey, K; Kohn, S; McInnes, L; Parker, S; Smolinski, B

    1999-06-09

    This paper describes work in progress to develop a standard for interoperability among high-performance scientific components. This research stems from growing recognition that the scientific community must better manage the complexity of multidisciplinary simulations and better address scalable performance issues on parallel and distributed architectures. Driving forces are the need for fast connections among components that perform numerically intensive work and parallel collective interactions among components that use multiple processes or threads. This paper focuses on the areas we believe are most crucial for such interactions, namely an interface definition language that supports scientific abstractions for specifying component interfaces and a ports connection model for specifying component interactions.

  16. A Queue Simulation Tool for a High Performance Scientific Computing Center

    NASA Technical Reports Server (NTRS)

    Spear, Carrie; McGalliard, James

    2007-01-01

    The NASA Center for Computational Sciences (NCCS) at the Goddard Space Flight Center provides high performance highly parallel processors, mass storage, and supporting infrastructure to a community of computational Earth and space scientists. Long running (days) and highly parallel (hundreds of CPUs) jobs are common in the workload. NCCS management structures batch queues and allocates resources to optimize system use and prioritize workloads. NCCS technical staff use a locally developed discrete event simulation tool to model the impacts of evolving workloads, potential system upgrades, alternative queue structures and resource allocation policies.
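
    The core of such a tool can be captured in a few lines (a minimal sketch in Python of a strict first-come-first-served, no-backfill batch queue; node counts, runtimes, and arrival times are invented example data, not the NCCS workload):

      import heapq

      TOTAL_NODES, free, running = 128, 128, []          # running: heap of (finish time, nodes)
      jobs = [(0.0, 64, 5.0), (0.5, 64, 3.0), (1.0, 128, 2.0), (1.2, 32, 1.0)]   # (arrival h, nodes, runtime h)

      for arrival, nodes, runtime in jobs:                # strict FCFS, no backfill
          clock = arrival
          while running and (nodes > free or running[0][0] <= clock):
              finish, freed = heapq.heappop(running)      # wait for the earliest running job to end
              clock, free = max(clock, finish), free + freed
          free -= nodes
          heapq.heappush(running, (clock + runtime, nodes))
          print(f"{nodes:3d}-node job: arrives t={arrival:.1f} h, starts t={clock:.1f} h, waits {clock - arrival:.1f} h")

    Alternative queue structures or allocation policies can then be compared by swapping in different dispatch rules and replaying the same job trace.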

  17. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure for selecting new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage

  18. Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data

    SciTech Connect

    Rubel, Oliver; Prabhat, Mr.; Wu, Kesheng; Childs, Hank; Meredith, Jeremy; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Ahern, Sean; Weber, Gunther H.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2008-08-28

    Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize and analyze a particle beam in a large, time-varying dataset.

  19. High Performance Home Building Guide for Habitat for Humanity Affiliates

    SciTech Connect

    Lindsey Marburger

    2010-10-01

    This guide covers basic principles of high performance Habitat construction, steps to achieving high performance Habitat construction, resources to help improve building practices, materials, etc., and affiliate profiles and recommendations.

  20. Simulation of bended planar waveguides for optical bus-couplers

    NASA Astrophysics Data System (ADS)

    Lorenz, Lukas; Nieweglowski, Krzysztof; Wolter, Klaus-Jürgen; Bock, Karlheinz

    2016-04-01

    In our work an optical bus-coupler is proposed, which enables easy bidirectional connection between two waveguides without interrupting the bus using a core-to-core coupling principle. With bended waveguides the coupling ratio can be tuned by adjusting the overlap area of the two cores. In order to ensure large overlap areas at short coupling lengths, the waveguides have rectangular cross sections. To examine the feasibility of this coupling concept a simulation was performed, which is presented in this paper. Due to multimode waveguides, used in short range data communication, a non-sequential ray tracing simulation is reasonable. Simulations revealed that the bending of the waveguide causes a redistribution of the energy within the core. Small radii push the main energy to the outer region of the core increasing the coupling efficiency. On the other hand, at excessive lowered bend radii additional losses occur (due to a coupling into the cladding), which is why an optimum has to be found. Based on the simulation results it is possible to derive requirements and design rules for the coupling element.

  1. Insert earphone modeling and measurement by IEC-60711 coupler.

    PubMed

    Huang, Chen-Hung; Pawar, S; Hong, Zih-Jyun; Huang, Jin

    2011-02-01

    In this study, an analytical model based on the equivalent circuit method is developed to simulate the frequency response of an insert earphone. This earphone incorporates a miniature loudspeaker commonly used in computer, communication, consumer, and car electronics. Through the laser triangulation method, electroacoustic parameters of a miniature loudspeaker are obtained. Several earphone design configurations are analyzed in accordance with the open and closed states of front leakage hole, vent, and back leakage hole. To validate the analysis, an insert earphone that is attached to IEC-60711 coupler and a specially designed fixture tube is experimentally measured for frequency response using electroacoustic equipment in the air. Simulation and experimental results show good agreement over the complete audible frequency range. Analysis indicates that states of front leakage hole, vent, and back leakage hole of an insert earphone have significant effects on frequency response. The front leakage hole affects the low frequency response, whereas the vent affects the fundamental resonance. Detailed analysis has been provided to further improve the design of insert earphones. PMID:21342831

  2. Multiplexed polymer surface plasmon sensor with integrated optical coupler

    NASA Astrophysics Data System (ADS)

    Pyo, Hyeon-Bong; Park, Se Ho; Chung, Kwang Hyo; Choi, Chang Auck

    2005-11-01

    In this paper, we describe a novel multiplexed surface plasmon resonance (SPR) sensor made of cyclic olefin copolymers (COCs, TOPAS™). This material has excellent chemical resistance, low water uptake (< 0.01%), and a high refractive index (n_He-Ne = 1.53), making it suitable for use as an optical coupler (prism) as well as a sensor substrate. We fabricated a standard-slide-glass-sized, prism-integrated, injection-molded COC-SPR sensor, which is being applied toward the multiplexed detection of DNA single nucleotide polymorphisms (SNPs). To evaluate the sensitivity of the COC-SPR sensor, we first patterned MgF2 on the gold-coated COC-SPR sensor and observed the shift of the reflectivity minimum (SPR dip) in pixel address. As the incident light source we used an expanded, collimated, rectangular-shaped He-Ne laser, with a diffuser for beam homogenization. With the expanded laser beam we varied the incident angle so that the angular shift is expressed as a shift of the darkest pixel on the CCD. For optimized SPR characteristics and sensor configuration, analytical calculations (Fresnel equations) were performed, and the best SPR condition was found to be d_Au ≈ 48 nm at a wavelength of λ = 633 nm, with a corresponding resonance angle of θ_SPR = 44.2° for the COC-SPR sensor.

  3. Polymer waveguide couplers based on metal nanoparticle-polymer nanocomposites.

    PubMed

    Signoretto, M; Suárez, I; Chirvony, V S; Abargues, R; Rodríguez-Cantó, P J; Martínez-Pastor, J

    2015-11-27

    In this work Au nanoparticles (AuNPs) are incorporated into poly(methyl methacrylate) (PMMA) waveguides to develop optical couplers that are compatible with planar organic polymer photonics. A method for growing AuNPs (of 10 to 100 nm in size) inside the commercially available Novolak resist is proposed with the intention of tuning the plasmon resonance and the absorption/scattering efficiencies inside the patterned structures. The refractive index of the MNP-Novolak nanocomposite (MNPs: noble metal nanoparticles) is carefully analysed both experimentally and numerically in order to find the appropriate fabrication conditions (filling factor and growth time) to optimize the scattering cross section at a desired wavelength. Then the nanocomposite is patterned inside a PMMA waveguide to exploit its scattering properties to couple and guide a normal incident laser light beam along the polymer. In this way, light coupling is experimentally demonstrated in a broad wavelength range (404-780 nm). Due to the elliptical shape of the MNPs the nanocomposite demonstrates a birefringence, which enhances the coupling to the TE mode up to efficiencies of around 1%. PMID:26526708

  4. High-Performance I/O: HDF5 for Lattice QCD

    SciTech Connect

    Kurth, Thorsten; Pochinsky, Andrew; Sarje, Abhinav; Syritsyn, Sergey; Walker-Loud, Andre

    2015-01-01

    Practitioners of lattice QCD/QFT have been among the primary pioneering users of state-of-the-art high-performance computing systems, and contribute to the stress testing of such new machines as soon as they become available. As with all aspects of high-performance computing, I/O is becoming an increasingly specialized component of these systems. In order to take advantage of the latest available high-performance I/O infrastructure, to ensure reliability and backwards compatibility of data files, and to help unify the data structures used in lattice codes, we have incorporated parallel HDF5 I/O into the SciDAC-supported USQCD software stack. Here we present the design and implementation of this I/O framework. Our HDF5 implementation outperforms optimized QIO at the 10-20% level and leaves room for further improvement by utilizing appropriate dataset chunking.
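
    For readers unfamiliar with parallel HDF5, the collective-write pattern it enables looks roughly as follows (a generic sketch using h5py and mpi4py, assuming h5py is built against an MPI-enabled HDF5; this is not the USQCD interface itself):

      from mpi4py import MPI
      import h5py
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      rows = 4
      data = np.full((rows, 8), rank, dtype="f8")                       # this rank's slab

      with h5py.File("field.h5", "w", driver="mpio", comm=comm) as f:   # one shared file
          dset = f.create_dataset("field", shape=(size * rows, 8), dtype="f8")
          dset[rank * rows:(rank + 1) * rows, :] = data                 # each rank writes its hyperslab

    Run under mpirun, every rank writes its own hyperslab of a single shared dataset, which is the behavior the chunking remark above seeks to tune.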

  5. HEATR project: ATR algorithm parallelization

    NASA Astrophysics Data System (ADS)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs to various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  6. Study on compactness of planar waveguide based integrated optic couplers using tooth shaped grating assisted geometry

    NASA Astrophysics Data System (ADS)

    Deka, Bidyut; Dutta, Aradhana; Sahu, Partha P.

    2013-11-01

    The introduction of Photonic Integrated Devices (PID) for applications in high speed optical networks providing multiple services to large numbers of users is indispensable, as this requires large scale integration (LSI); consequently, the miniaturization of PID device components to a microscale platform has attracted immense attention from researchers and entrepreneurs. In this paper, we present a comparative study on the compactness of basic PID components using tooth shaped grating assisted (TSGA) geometry. The basic PID components, namely the directional coupler (DC), the two mode interference (TMI) coupler, and the multimode interference (MMI) coupler, have been designed using TSGA geometry in the coupling region, and their coupling characteristics have been estimated using a mathematical model based on the sinusoidal mode simple effective index method (SM-SEIM). The dependence of modal power in the coupling region on the waveguide separation gap and the coupling gap refractive index has been studied. From the estimated dependences of beat length and access waveguide length on the waveguide separation gap, with a permissible propagation loss of ~0.15 dB/cm, it has been found that the beat length of the grating assisted TMI coupler (GA-TMI) is ~0.5 times that of the grating assisted DC (GA-DC) and ~0.44 times that of the grating assisted MMI (GA-MMI) coupler. Further, it is seen that the device length, including the access waveguide length, of the GA-MMI coupler is less than that of the GA-TMI coupler and the GA-DC. The SM-SEIM based numerical results are then compared with beam propagation method (BPM) results obtained by using commercially available optiBPM software.

  7. Benchmarking: More Aspects of High Performance Computing

    SciTech Connect

    Ravindrudu, Rahul

    2004-01-01

    The original HPL algorithm makes the assumption that all data can fit entirely in main memory. This assumption obviously gives good performance due to the absence of disk I/O. However, not all applications can fit their entire data in memory. Applications that require a fair amount of I/O to move data between main memory and secondary storage are more indicative of the usage of a Massively Parallel Processor (MPP) system. Given this scenario, a well designed I/O architecture will play a significant part in the performance of the MPP system on regular jobs, and this is not represented in the current benchmark. The modified HPL algorithm is hoped to be a step toward filling this void. The most important factor in the performance of out-of-core algorithms is the actual I/O operations performed and their efficiency in transferring data between main memory and disk. Various methods for performing I/O operations were introduced in the report. The I/O method to use depends on the design of the out-of-core algorithm. Conversely, the performance of the out-of-core algorithm is affected by the choice of I/O operations. This implies that good performance is achieved when I/O efficiency is closely tied to the out-of-core algorithm. The out-of-core algorithm must therefore be designed with I/O in mind from the start. It is easily observed in the timings for the various plots that I/O plays a significant part in the overall execution time. This leads to an important conclusion: retro-fitting an existing code may not be the best choice. The right-looking algorithm selected for the LU factorization is a recursive algorithm and performs well when the entire dataset is in memory. At each stage of the loop the entire trailing submatrix is read into memory panel by panel. This gives a polynomial number of I/O reads and writes. If the left-looking algorithm were selected for the main loop, the number of I/O operations involved would be linear in the number of columns. This is due to the data access

  8. High-performance commercial building systems

    SciTech Connect

    Selkowitz, Stephen

    2003-10-01

    This report summarizes key technical accomplishments resulting from the three year PIER-funded R&D program, ''High Performance Commercial Building Systems'' (HPCBS). The program targets the commercial building sector in California, an end-use sector that accounts for about one-third of all California electricity consumption and an even larger fraction of peak demand, at a cost of over $10B/year. Commercial buildings also have a major impact on occupant health, comfort and productivity. Building design and operations practices that influence energy use are deeply engrained in a fragmented, risk-averse industry that is slow to change. Although California's aggressive standards efforts have resulted in new buildings designed to use less energy than those constructed 20 years ago, the actual savings realized are still well below technical and economic potentials. The broad goal of this program is to develop and deploy a set of energy-saving technologies, strategies, and techniques, and improve processes for designing, commissioning, and operating commercial buildings, while improving health, comfort, and performance of occupants, all in a manner consistent with sound economic investment practices. Results are to be broadly applicable to the commercial sector for different building sizes and types, e.g. offices and schools, for different classes of ownership, both public and private, and for owner-occupied as well as speculative buildings. The program aims to facilitate significant electricity use savings in the California commercial sector by 2015, while assuring that these savings are affordable and promote high quality indoor environments. The five linked technical program elements contain 14 projects with 41 distinct R&D tasks. Collectively they form a comprehensive Research, Development, and Demonstration (RD&D) program with the potential to capture large savings in the commercial building sector, providing significant economic benefits to building owners and

  9. High-performance computing for flight vehicles; Proceedings of the Symposium, Washington, Dec. 7-9, 1992

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K. (Editor); Venneri, Samuel L. (Editor)

    1992-01-01

    The present conference discusses high-performance computing systems for flight vehicles, large-scale simulations on high-performance flight computers and software, multidisciplinary and design/optimization applications of computers, computational electromagnetics and acoustics, the simulation of aircraft powerplant turbomachinery and reacting flows, and flow calculations on parallel machines. Also discussed are direct flow simulation Monte Carlo methods, structural mechanics sensitivity and fracture calculations on parallel machines, grid-generation and advanced algorithms for CFD, advanced solid-mechanics and structures applications, and advancements in flow visualization technology and neural networks.

  10. A study of polaritonic transparency in couplers made from excitonic materials

    SciTech Connect

    Singh, Mahi R.; Racknor, Chris

    2015-03-14

    We have studied light-matter interaction in hybrid systems of quantum dots and an exciton-polaritonic coupler. The coupler is made by embedding two slabs of an excitonic material (CdS) into a host excitonic material (ZnO). An ensemble of non-interacting quantum dots is doped into the coupler. The bound exciton-polariton states in the coupler are calculated using the transfer matrix method in the presence of the coupling between the external light (photons) and excitons. These bound exciton-polaritons interact with the excitons present in the quantum dots, and the coupler acts as a reservoir. The Schrödinger equation method has been used to calculate the absorption coefficient in the quantum dots. It is found that when the distance between the two slabs (CdS) is greater than the decay length of the evanescent waves, the absorption spectrum has two peaks and one minimum. The minimum corresponds to a transparent state in the system. However, when the distance between the slabs is smaller than the decay length of the evanescent waves, the absorption spectrum has three peaks and two transparent states. In other words, one transparent state can be switched to two transparent states when the distance between the two layers is modified. This could be achieved by applying stress and strain fields. It is also found that the transparent states can be switched on and off by applying an external control laser field.

  11. Heat-driven thermoacoustic cryocooler operating at liquid hydrogen temperature with a unique coupler

    NASA Astrophysics Data System (ADS)

    Hu, J. Y.; Luo, E. C.; Li, S. F.; Yu, B.; Dai, W.

    2008-05-01

    A heat-driven thermoacoustic cryocooler is constructed. A unique coupler composed of a tube, a reservoir, and an elastic diaphragm is introduced to couple a traveling-wave thermoacoustic engine (TE) to a two-stage pulse tube refrigerator (PTR). The amplitude of the pressure wave generated in the engine is first amplified in the coupler, and the wave then passes into the refrigerator to pump heat. The TE uses nitrogen as its working gas while the PTR still uses helium as its working gas. With this coupler, the efficiency of the system is doubled. The engine and coupler match at a much lower operating frequency, which is of great benefit for the PTR in obtaining a lower cooling temperature. The coupling position between the coupler and the engine is also optimized, and the onset problem is effectively solved. With these improvements, the heat-driven thermoacoustic cryocooler reaches a lowest temperature of 18.1 K, demonstrating heat-driven thermoacoustic refrigeration technology for cooling at liquid hydrogen temperatures.

  12. RF Conditioning and Testing of Fundamental Power Couplers for SNS Superconducting Cavity Production

    SciTech Connect

    M. Stirbet; G.K. Davis; M. A. Drury; C. Grenoble; J. Henry; G. Myneni; T. Powers; K. Wilson; M. Wiseman; I.E. Campisi; Y.W. Kang; D. Stout

    2005-05-16

    The Spallation Neutron Source (SNS) makes use of 33 medium-beta (0.61) and 48 high-beta (0.81) superconducting cavities. Each cavity is equipped with a fundamental power coupler, which should withstand the full klystron power of 550 kW in full reflection for the duration of an RF pulse of 1.3 ms at a 60 Hz repetition rate. Before assembly onto a superconducting cavity, the vacuum components of the coupler are submitted to acceptance procedures consisting of preliminary quality assessments, cleaning and clean room assembly, vacuum leak checks and baking under vacuum, followed by conditioning and RF high power testing. Similar acceptance procedures (except clean room assembly and baking) were applied to the airside components of the coupler. All 81 fundamental power couplers for SNS superconducting cavity production have been RF power tested at JLab Newport News and, beginning in April 2004, at SNS Oak Ridge. This paper gives details of coupler processing and the assessed RF high-power performance.

  13. Magnetic field sensor based on cascaded microfiber coupler with magnetic fluid

    NASA Astrophysics Data System (ADS)

    Mao, Lianmin; Pu, Shengli; Su, Delong; Wang, Zhaofang; Zeng, Xianglong; Lahoubi, Mahieddine

    2016-09-01

    A magnetic field sensor based on a cascaded microfiber coupler with magnetic fluid is proposed and experimentally demonstrated. The magnetic fluid is utilized as the cladding of the fused regions of the cascaded microfiber coupler. As the interference valley wavelength of the sensing structure is sensitive to ambient variation, and considering the magnetic-field-dependent refractive index of the magnetic fluid, the proposed structure is employed for magnetic field sensing. The effective coupling length for each coupling region of the as-fabricated cascaded microfiber coupler is 6031 μm. The achieved sensitivity is 125 pm/Oe, which is about three times larger than that of a previously reported structure based on a single microfiber coupler. Experimental results indicate that the sensing sensitivity can be further improved by increasing the effective coupling length or by cascading more microfiber couplers. The proposed magnetic field sensor is attractive due to its low cost, immunity to electromagnetic interference, and high sensitivity, and it also has potential in other tunable all-fiber photonic devices, such as filters.

  14. Design and fabrication of adiabatic vertical couplers for hybrid integration by flip-chip bonding

    NASA Astrophysics Data System (ADS)

    Mu, Jinfeng; Sefunc, Mustafa A.; Xu, Bojian; Dijkstra, Meindert; García-Blanco, Sonia M.

    2016-02-01

    Rare-earth-ion-doped crystalline potassium double tungstates, such as KY(WO4)2 and KLu(WO4)2, exhibit many properties that make them promising candidates for the realization of lasers and amplifiers in integrated photonics. One of the key challenges for the hybrid integration of different photonic platforms remains the design and fabrication of low-loss, fabrication-tolerant couplers for transferring light between different waveguides. In this paper, adiabatic vertical couplers realized by flip-chip bonding of polymer waveguides to Si3N4 devices are designed, fabricated, and tested. An efficient design flow combining 2D and 3D simulations is proposed and its validity demonstrated. The vertical couplers will ultimately be used for the integration of erbium-doped KY(WO4)2 waveguides with passive platforms. The designed couplers exhibit less than 0.5 dB loss at adiabatic angles and below 1 dB loss for ±0.5 μm lateral misalignment. The fabricated vertical couplers show less than 1 dB loss on average for different adiabatic angles of the Si3N4 tapers, in good quantitative agreement with the simulations.

  15. Characteristics Of Fused Couplers Below Cut-Off

    NASA Astrophysics Data System (ADS)

    Meyer, T. J.; Tekippe, V. J.

    1989-02-01

    A number of different architectures are being explored for the utilization of optical fiber in the subscriber loop. In addition to reliability and maintainability, cost is a prime consideration since full implementation of fiber in the local loop will not occur until it is economically viable. It is becoming increasingly clear that in order to accommodate a number of ISDN applications, including high definition television (HDTV), singlemode fiber with a singlemode laser at the terminal end will be required. The situation at the subscriber end is quite different, however. The data rates are expected to be low on the return path to allow for POTS (plain old telephone service) and some data transfer. When this requirement is combined with cost and reliability considerations, the inexpensive lasers developed for the CD (compact disk) market become quite attractive. The biggest disadvantage of this source is that fiber which is optimized for singlemode operation at 1300 nm tends to be multimode in the 800 nm band where these lasers operate. Previous papers have considered such effects as modal noise and pulse dispersion when using these lasers with fiber that is singlemode in the 1300 nm band [1]. Another consideration is the passive components required to implement such an architecture. Figure 1 shows a typical bidirectional design with full duplex operation on a single fiber. The key component is the 800/1300 wavelength division multiplexer/demultiplexer (WDM). Because of the multimode nature of the fiber in the 800 nm band, all-fiber approaches to fabricating the WDM, such as the fused biconical taper (FBT) approach, raise new issues which are not encountered, for example, with 1300/1500 nm WDMs [2]. In this paper we discuss the effects of the multimode behavior of the fiber on the performance of fused couplers and WDMs.

  16. High performance ultrasonic field simulation on complex geometries

    NASA Astrophysics Data System (ADS)

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations in the 0.1 s range. In this paper, we present recent works that aim at similar performances on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing a fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivisions to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to the ones produced by CIVA on identical test configurations including mono-element and multiple-element transducers, homogeneous, meshed 3D CAD specimens, isotropic and anisotropic materials and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1s range.

  17. Archon: A modern controller for high performance astronomical CCDs

    NASA Astrophysics Data System (ADS)

    Bredthauer, Greg

    2014-08-01

    The rapid evolution of commercial FPGAs and analog ICs has enabled the development of Archon, a new modular high performance astronomical CCD controller. CCD outputs are digitized by 16-bit 100 MHz ADCs with differential AC-coupled preamplifiers. The raw data stream from an ADC can be stored in parallel with standard image data into three onboard 512 MB frame buffers. Pixel values are computed using digital correlated double sampling. At low pixel rates (< 1 MHz), the dynamic range achievable by averaging hundreds of ADC samples per pixel can exceed 16 bits, so an option to store 32 bits per pixel is provided. CCD clocks are generated by 14-bit 100 MHz DACs. The scripted timing core driving the clocks can generate a new target voltage for each clock every 10 ns, and the clock slew rates are individually programmable. CCD biases are derived from 16-bit DACs, are continuously monitored for voltage and current, and power up and down in a customizable sequence. Communication between the controller and a host computer occurs over a gigabit Ethernet interface (fiber or copper). A CCD configuration is specified by a simple text file. Together, these features simplify the tuning and debugging of scientific CCDs, and enable CCD-limited imaging. I present details of the controller architecture, examples of CCD tuning, and measured performance data of the controller alone (dynamic range of 108 dB at 100 kHz and 98 dB at 1 MHz) and in combination with an STA1600LN CCD.
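
    Digital correlated double sampling of the kind described reduces read noise by averaging many raw ADC samples per level before differencing (a toy sketch in Python; the sample counts and noise level are invented, not Archon specifications):

      import numpy as np

      rng = np.random.default_rng(1)
      signal, read_noise = 1000.0, 5.0                    # ADU; step to measure, rms noise per raw sample

      def cds_pixel(n_avg):
          reset = rng.normal(0.0, read_noise, n_avg)      # raw samples of the reset level
          video = rng.normal(signal, read_noise, n_avg)   # raw samples of the signal level
          return video.mean() - reset.mean()              # digital CDS value for one pixel

      for n in (1, 16, 256):
          trials = np.array([cds_pixel(n) for _ in range(2000)])
          print(f"{n:3d} samples/level -> rms error {trials.std():.2f} ADU")

    The rms error falls roughly as 1/sqrt(n), which is why averaging hundreds of samples per pixel at low pixel rates can push the effective dynamic range past 16 bits, as noted above.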

  18. Industrial applications of high-performance computing for phylogeny reconstruction

    NASA Astrophysics Data System (ADS)

    Bader, David A.; Moret, Bernard M.; Vawter, Lisa

    2001-07-01

    parallel and high-performance computers.

  19. High performance computing methods for the integration and analysis of biomedical data using SAS.

    PubMed

    Brown, Justin R; Dinu, Valentin

    2013-12-01

    From microarrays and next generation sequencing to clinical records, the amount of biomedical data is growing at an exponential rate. Handling and analyzing these large amounts of data demands that computing power and methodologies keep pace. The goal of this paper is to illustrate how high performance computing methods in SAS can be easily implemented, without the need for extensive computer programming knowledge or access to supercomputing clusters, to help address the challenges posed by large biomedical datasets. We illustrate the utility of database connectivity, pipeline parallelism, multi-core parallel processing, and distributed processing across multiple machines. Simulation results are presented for parallel and distributed processing. Finally, a discussion of the costs and benefits of such methods compared to traditional HPC supercomputing clusters is given.
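
    The examples in the paper are written in SAS; as a language-neutral analogue, the multi-core pattern it describes (split a large table into chunks, process the chunks on separate cores, combine the results) looks roughly like this sketch in Python, with invented data standing in for a biomedical table:

      from multiprocessing import Pool
      import numpy as np

      def summarize(chunk):
          return chunk.mean(axis=0)                       # per-chunk work, e.g. per-variable means

      if __name__ == "__main__":
          data = np.random.rand(1_000_000, 20)            # stand-in for a large clinical/genomic table
          chunks = np.array_split(data, 8)                # one chunk per worker process
          with Pool(processes=8) as pool:
              partial = pool.map(summarize, chunks)       # multi-core parallel processing
          overall = np.average(partial, axis=0, weights=[len(c) for c in chunks])
          print(overall[:5])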

  20. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
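
    The decoupling idea is independent of any particular store; stripped of the patented middleware details, archiving a set of per-process checkpoint files as cloud objects can be sketched as follows (Python with the standard boto3 S3 client; the bucket name and file paths are invented):

      import glob
      import boto3

      s3 = boto3.client("s3")
      bucket = "hpc-archive-example"                              # assumed, pre-existing bucket

      for path in sorted(glob.glob("checkpoints/rank*.ckpt")):    # one file per compute process
          key = "run-0001/" + path.rsplit("/", 1)[-1]             # object name derived from file name
          with open(path, "rb") as fh:
              s3.put_object(Bucket=bucket, Key=key, Body=fh)
          print(f"archived {path} -> s3://{bucket}/{key}")

    A log-structured middleware layer such as PLFS additionally rewrites the file pattern produced by the parallel application before the objects are pushed, optionally from a burst buffer node, as described above.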

  1. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  2. Concentric ring flywheel with hooked ring carbon fiber separator/torque coupler

    DOEpatents

    Kuklo, Thomas C.

    1999-01-01

    A concentric ring flywheel with expandable separators, which function as torque couplers, between the rings to take up the gap formed between adjacent rings due to differential expansion between different radius rings during rotation of the flywheel. The expandable separators or torque couplers include a hook-like section at an upper end which is positioned over an inner ring and a shelf-like or flange section at a lower end onto which the next adjacent outer ring is positioned. As the concentric rings are rotated the gap formed by the differential expansion there between is partially taken up by the expandable separators or torque couplers to maintain torque and centering attachment of the concentric rings.

  3. Concentric ring flywheel with hooked ring carbon fiber separator/torque coupler

    DOEpatents

    Kuklo, T.C.

    1999-07-20

    A concentric ring flywheel with expandable separators, which function as torque couplers, between the rings to take up the gap formed between adjacent rings due to differential expansion between different radius rings during rotation of the flywheel. The expandable separators or torque couplers include a hook-like section at an upper end which is positioned over an inner ring and a shelf-like or flange section at a lower end onto which the next adjacent outer ring is positioned. As the concentric rings are rotated the gap formed by the differential expansion there between is partially taken up by the expandable separators or torque couplers to maintain torque and centering attachment of the concentric rings. 2 figs.

  4. Design of Input Coupler and Output Window for Ka-Band Gyro-TWT

    NASA Astrophysics Data System (ADS)

    Alaria, M. K.; Singh, Khushbu; Choyal, Y.; Sinha, A. K.

    2013-10-01

    The design of an input coupler with a loaded interaction structure for a Ka-band gyro traveling wave tube (gyro-TWT) has been carried out using Ansoft HFSS to operate in the TE11 mode. The return loss (S11) and transmission loss (S21) of the Ka-band gyro-TWT input coupler have been found to be -27.3 dB and -0.05 dB, respectively. The design of the output window for the Ka-band gyro-TWT has been carried out using CST Microwave Studio. In this paper, thermal analysis of the input coupler for the Ka-band gyro-TWT has also been carried out using ANSYS software. In the simulation results, the temperature on the ceramic disc of the window does not exceed 80 °C and remains within the safe limit. The optimized design of the input coupler and output window for the gyro-TWT allows low heat loads in the ceramic and consequently a low temperature rise.

  5. Copper Prototype Measurements of the HOM, LOM and SOM Couplers for the ILC Crab Cavity

    SciTech Connect

    Burt, G.; Ambattu, P.K.; Dexter, A.C.; Bellantoni, L.; Goudket, P.; McIntosh, P.A.; Li, Z.; Xiao, L.; /SLAC

    2008-06-23

    The ILC Crab Cavity is positioned close to the IP and delivered luminosity is very sensitive to the wakefields induced in it by the beam. A set of couplers were designed to couple to and damp the spurious modes of the crab cavity. As the crab cavity operates using a dipole mode, it has different damping requirements from an accelerating cavity. A separate coupler is required for the monopole modes below the operating frequency of 3.9 GHz (known as the LOMs), the opposite polarization of the operating mode (the SOM), and the modes above the operating frequency (the HOMs). Prototypes of each of these couplers have been manufactured out of copper and measured attached to an aluminum nine cell prototype of the cavity and their external Q factors were measured. The results were found to agree well with numerical simulations.

  6. Copper Prototype Measurements of the HOM, LOM And SOM Couplers for the ILC Crab Cavity

    SciTech Connect

    Burt, G.; Ambattu, P.K.; Dexter, A.C.; Bellantoni, L.; Goudket, P.; McIntosh, P.A.; Li, Z.; Xiao, L.; /SLAC

    2011-11-04

    The ILC Crab Cavity is positioned close to the IP and delivered luminosity is very sensitive to the wakefields induced in it by the beam. A set of couplers were designed to couple to and damp the spurious modes of the crab cavity. As the crab cavity operates using a dipole mode, it has different damping requirements from an accelerating cavity. A separate coupler is required for the monopole modes below the operating frequency of 3.9 GHz (known as the LOMs), the opposite polarization of the operating mode (the SOM), and the modes above the operating frequency (the HOMs). Prototypes of each of these couplers have been manufactured out of copper and measured attached to an aluminum nine cell prototype of the cavity and their external Q factors were measured. The results were found to agree well with numerical simulations.

  7. Demonstration of a cavity coupler based on a resonant waveguide grating.

    PubMed

    Brückner, Frank; Friedrich, Daniel; Clausnitzer, Tina; Burmeister, Oliver; Britzger, Michael; Kley, Ernst-Bernhard; Danzmann, Karsten; Tünnermann, Andreas; Schnabel, Roman

    2009-01-01

    Thermal noise in multilayer optical coatings may not only limit the sensitivity of future gravitational wave detectors in their most sensitive frequency band but is also a major impediment for experiments that aim to reach the standard quantum limit or to cool mechanical systems to their quantum ground state. Here, we present the experimental realization and characterization of a cavity coupler based on a surface-relief guided-mode resonant grating. Since the required thickness of the dielectric coating is dramatically decreased compared to conventional mirrors, it is expected to provide low mechanical loss and, thus, low thermal noise. The cavity coupler was incorporated into a Fabry-Perot resonator together with a conventional high-quality mirror. The finesse of this cavity was measured to be F = 657, which corresponds to a coupler reflectivity of R = 99.08%. PMID:19129884
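
    As a rough consistency check (not taken from the paper), the quoted numbers can be compared against the standard lossless Fabry-Perot relation, assuming the conventional end mirror is near-perfect:

      import math

      R1, R2 = 0.9999, 0.9908                 # assumed end-mirror reflectivity; measured coupler reflectivity
      rho = math.sqrt(R1 * R2)                # round-trip amplitude reflectivity (lossless approximation)
      finesse = math.pi * math.sqrt(rho) / (1.0 - rho)
      print(f"lossless estimate: F ~ {finesse:.0f}")      # about 670

    The lossless estimate slightly exceeds the measured F = 657, consistent with some additional round-trip loss in the grating coupler.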

  8. Testing of HOM coupler designs on a single cell niobium cavity

    SciTech Connect

    Peter Kneisel; Gianluigi Ciovati; Ganapati Rao Myneni; Genfa Wu; Jacek Sekutowicz

    2005-05-01

    Coaxial higher order mode (HOM) couplers were developed initially for HERA cavities and subsequently for TESLA cavities. They were adopted later for SNS and Jlab upgrade cavities. The principle of operation is the rejection of the fundamental mode by the tunable filter and the transmission of the HOMs. It has been recognized recently that for continuous wave or high duty factor applications of the TESLA coupler the output pick-up probe must stay superconducting in order to avoid its heating by the fundamental mode residual magnetic field leading to deterioration of the cavity quality factor. In addition, the thermal conduction of existing rf feedthrough designs is only marginally sufficient to keep even the niobium probe tip superconducting in cw operation. We have equipped a single-cell niobium cavity with the modified HOM couplers and tested the new designs by measuring Q vs Eacc behavior at 2 K for different feedthroughs and probe tip materials.

  9. RF characterization and testing of ridge waveguide transitions for RF power couplers

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh; Jose, Mentes; Singh, G. N.; Kumar, Girish; Bhagwat, P. V.

    2016-12-01

    RF characterization of rectangular-to-ridge waveguide transitions for RF power couplers has been carried out by connecting them back to back. Rectangular waveguide to N-type adapters are first calibrated by the TRL method and then used for RF measurements. Detailed information about their RF behavior is obtained from measurements and full-wave simulations. It is shown that the two transitions can be characterized and tuned for the required return loss at the design frequency of 352.2 MHz. This opens the possibility of testing and conditioning two transitions together on a test bench. Finally, an RF coupler based on these transitions is coupled to an accelerator cavity. The power coupler is successfully tested up to 200 kW at 352.2 MHz with a 0.2% duty cycle.

  10. On the Importance of Symmetrizing RF Coupler Fields for Low Emittance Beams

    SciTech Connect

    Li, Zenghai; Zhou, Feng; Vlieks, Arnold; Adolphsen, Chris; /SLAC

    2011-06-23

    The input power of an accelerator structure is normally fed through a coupling slot (or slots) on the outer wall of the structure via magnetic coupling. While providing a perfect match, the coupling slots may produce non-axisymmetric fields in the coupler cell that can induce emittance growth as the beam is accelerated in such a field. This effect is especially important for low emittance beams at low energies, such as in the injector accelerators for light sources. In this paper, we present studies of the multipole fields of different RF coupler designs and their effect on beam emittance for an X-band photocathode gun being jointly designed with LLNL, and for X-band accelerator structures. We present symmetrized RF coupler designs for these components to preserve the beam emittance.

  11. High performance computing and communications: Advancing the frontiers of information technology

    SciTech Connect

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  12. 1 x N star coupler as a distributed fiber-optic strain sensor in a white-light interferometer.

    PubMed

    Yuan, L; Zhou, L

    1998-07-01

    A novel technique using a 1 x N star fiber-optic coupler as a distributed strain sensor in a white-light interferometer to measure the strain distribution is presented. The measuring principle is described, and a 1 x 4 star coupler with four fiber-optic strain sensors is demonstrated. The experiment is performed with four sensors attached to a combination plastic specimen.

  13. Realization of fiber-based OCT system with broadband photonic crystal fiber coupler

    NASA Astrophysics Data System (ADS)

    Ryu, Seon Young; Na, Jihoon; Choi, Hae Young; Choi, Woo Jun; Lee, Byeong Ha; Yang, Gil-Ho

    2006-02-01

    We implemented a fiber-based optical coherence tomography (OCT) system using a photonic crystal fiber (PCF) coupler that supports an ultra-wideband spectral bandwidth. The PCF coupler, fabricated by the fused biconical tapered (FBT) method, showed a rather flat coupling efficiency over a broad spectral bandwidth. Furthermore, the mode-field shapes at the output ports of the PCF coupler showed single-mode characteristics over a wide band. These features enable the OCT system to operate at 1300 nm as well as at 800 nm without changing the coupler. The FWHM of the interferogram was measured to be about 3 um when a white-light source was used, while a Ti:Sapphire laser and a conventional superluminescent diode (SLD) produced interferograms with FWHMs of about 4 um and 15 um, respectively. The imaging performance of the PCF-based OCT system was demonstrated by imaging an in vitro rat eye and Misgurnus mizolepis skin with an SLD source at 1300 nm and by imaging a tooth with a Ti:Sapphire laser source at 800 nm. The PCF coupler might enable the use of an ultra-wideband supercontinuum-generated light source in fiber-optic OCT systems for high resolution, and the use of a white-light source as a cost-effective solution for fiber-based high-resolution OCT systems. Further, this coupler can also operate in a single mode not only near 1000 nm but also near 500 nm, a feature that may support the realization of a fiber-based second-harmonic (SH) OCT system.
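
    The interferogram FWHM quoted above is set by the source coherence length; for a Gaussian spectrum the round-trip axial resolution of an OCT system is dz = (2 ln 2 / pi) * lambda0^2 / dlambda. The sketch below evaluates this relation for assumed source parameters (the center wavelengths and bandwidths are illustrative, not the values used in the paper).

      import numpy as np

      def axial_resolution_um(center_nm: float, bandwidth_nm: float) -> float:
          """Round-trip axial resolution for a Gaussian source spectrum (in air)."""
          return (2.0 * np.log(2.0) / np.pi) * center_nm**2 / bandwidth_nm * 1e-3  # nm -> um

      # Illustrative source parameters (assumed, not from the paper).
      sources = {
          "SLD, 1300 nm center / 40 nm bandwidth":         (1300.0, 40.0),
          "Ti:Sapphire, 800 nm center / 100 nm bandwidth": (800.0, 100.0),
          "white light, 800 nm center / 300 nm bandwidth": (800.0, 300.0),
      }

      for name, (center, bw) in sources.items():
          print(f"{name}: ~{axial_resolution_um(center, bw):.1f} um")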

  14. A fused side-pumping optical fiber coupler based on twisting

    NASA Astrophysics Data System (ADS)

    Yi, Bokai; Chang, Xinzu; Zhou, Xuanfeng; Chen, Zilun; Zhao, Guomin

    2014-12-01

    Pump coupler technology is one of the critical technologies for high-power lasers and amplifiers. Side-pumping technology couples the pump beam into the inner cladding of a double-clad fiber through the side of the fiber. Compared with end-pumping by a tapered fused bundle (TFB), it has several advantages. Because the signal fiber is not disconnected, high transmission efficiency is guaranteed, making it possible to transmit a high-power signal. Additionally, the pump light is coupled into the double-clad fiber along the whole length of the coupler's body (~5-10 cm), which reduces the thermal effects caused by leakage of pump light and results in high pump-power handling capability. For the realization of reliable, rugged and efficient high-power fiber amplifiers and fiber laser systems, a novel kind of fused side-pumping coupler based on twisting is developed. Complete simulations of the side-pumping process were carried out. The simulations show that the pump efficiency, one of the vital parameters of a pump coupler, is significantly influenced by the coupling length and by the numerical aperture (NA) and taper ratio of the pump fiber, whereas varying these parameters barely degrades the high signal transmission efficiency. With the parameters optimized in the simulations, the pump and signal coupling efficiencies are 97.3% and 99.4%, respectively. Based on this theoretical analysis, a side-pumping coupler was demonstrated with pump and signal coupling efficiencies of 91.2% and 98.4%, respectively. This fiber coupler can be implemented in almost any fiber laser or amplifier architecture.

  15. DOE research in utilization of high-performance computers

    SciTech Connect

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure.

  16. Coherent tunneling by adiabatic process in a four-waveguide optical coupler

    NASA Astrophysics Data System (ADS)

    Shi, Jian; Ma, Rui-Qiong; Duan, Zuo-Liang; Liang, Meng; Zhang, Wen-wen; Dong, Jun

    2016-07-01

    We numerically simulate the Schrödinger-like paraxial wave equation of a four-waveguide system. Coherent tunneling by adiabatic passage in a four-waveguide optical coupler is analyzed by borrowing the dressed-state theory of coherent atomic systems. We discuss the optical coupling mechanism and the coupling efficiency of light energy in both the intuitive and counterintuitive tunneling schemes, and analyze the threshold condition between the adiabatic and non-adiabatic regimes in the intuitive scheme. The results show that this coupler can be used as a power splitter under certain conditions.
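
    The spatial evolution of the guided amplitudes in such a coupler is commonly modeled with coupled-mode equations, dA/dz = -i C(z) A, where C(z) contains the z-dependent coupling coefficients between neighboring waveguides; in the counterintuitive scheme the output-side coupling is turned on before the input-side coupling. The sketch below integrates a minimal three-waveguide version with Gaussian coupling profiles; all parameter values are assumed for illustration and are not taken from the paper.

      import numpy as np

      L = 60.0      # device length (mm), assumed
      z0 = L / 2.0
      sep = 8.0     # offset between the two coupling regions (mm), assumed
      width = 10.0  # width of each coupling region (mm), assumed
      c_max = 0.5   # peak coupling coefficient (1/mm), assumed

      def couplings(z, counterintuitive=True):
          s = sep if counterintuitive else -sep
          c12 = c_max * np.exp(-((z - (z0 + s / 2.0)) / width) ** 2)  # input-side coupling
          c23 = c_max * np.exp(-((z - (z0 - s / 2.0)) / width) ** 2)  # output-side coupling
          return c12, c23

      def rhs(z, a, counterintuitive=True):
          c12, c23 = couplings(z, counterintuitive)
          m = np.array([[0.0, c12, 0.0],
                        [c12, 0.0, c23],
                        [0.0, c23, 0.0]])
          return -1j * m @ a

      # Light launched into waveguide 1; integrate dA/dz = -i C(z) A with fixed-step RK4.
      a = np.array([1.0 + 0j, 0.0, 0.0])
      n_steps = 4000
      dz = L / n_steps
      z = 0.0
      for _ in range(n_steps):
          k1 = rhs(z, a)
          k2 = rhs(z + dz / 2.0, a + dz / 2.0 * k1)
          k3 = rhs(z + dz / 2.0, a + dz / 2.0 * k2)
          k4 = rhs(z + dz, a + dz * k3)
          a += dz / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
          z += dz

      print("output powers (WG1, WG2, WG3):", np.round(np.abs(a) ** 2, 3))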

  17. Effects of normal mode loss in dielectric waveguide directional couplers and interferometers

    NASA Astrophysics Data System (ADS)

    Youngquist, R. C.; Stokes, L. F.; Shaw, H. J.

    1983-12-01

    Theoretical arguments and experimental evidence are presented to show that the two fundamental normal modes of a coupled waveguide structure have different attenuations in traversing such a structure. The effects of this phenomenon on evanescent wave directional couplers and interferometers are derived. Parasitic effects in Mach-Zehnder and Sagnac interferometers utilizing directional couplers are described. An asymmetric output for the recently demonstrated all-single-mode fiber resonator is predicted and compared to experimental results. Some qualitative results are presented for integrated optic directional coupler switches.

  18. Design of low-dispersion output coupler for Cr:LiSAF lasers

    NASA Astrophysics Data System (ADS)

    Liao, Chunyan; Qin, Junjun; Zhu, Xiuhong

    2015-02-01

    An output coupler designed for dispersion compensation in Cr:LiSAF femtosecond lasers is reported. It is composed of 50 alternating Ta2O5 and SiO2 layers whose thicknesses are obtained by computer optimization to provide low transmittance and as little group delay dispersion as possible. The optimized output coupler has a continuous low transmittance of 1% and a group delay dispersion of 0 +/- 6 fs^2 from 750 nm to 900 nm, which meets the need for dispersion compensation in Cr:LiSAF lasers.
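
    The standard tool for evaluating such a multilayer design is the characteristic (transfer) matrix method: each layer contributes a 2x2 matrix, the product gives the stack's complex reflection coefficient, and the group delay dispersion follows from the second derivative of the reflection phase with respect to angular frequency. The sketch below implements this for a generic Ta2O5/SiO2 stack at normal incidence; the layer count, thicknesses, and refractive indices are placeholders, not the optimized 50-layer design from the paper.

      import numpy as np

      C_LIGHT = 299792458.0  # speed of light (m/s)

      def stack_reflection(wavelengths_m, indices, thicknesses_m, n_in=1.0, n_sub=1.52):
          """Complex reflection coefficient of a lossless dielectric stack at normal incidence."""
          r = np.empty(wavelengths_m.size, dtype=complex)
          for k, lam in enumerate(wavelengths_m):
              m = np.eye(2, dtype=complex)
              for n, d in zip(indices, thicknesses_m):
                  delta = 2.0 * np.pi * n * d / lam
                  layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                    [1j * n * np.sin(delta), np.cos(delta)]])
                  m = m @ layer
              b, c = m @ np.array([1.0, n_sub])
              r[k] = (n_in * b - c) / (n_in * b + c)
          return r

      # Placeholder design: 20 quarter-wave layers at 825 nm (assumed indices 2.1 / 1.45).
      n_h, n_l, lam0 = 2.1, 1.45, 825e-9
      indices = [n_h if i % 2 == 0 else n_l for i in range(20)]
      thicknesses = [lam0 / (4.0 * n) for n in indices]

      lam = np.linspace(750e-9, 900e-9, 600)
      r = stack_reflection(lam, indices, thicknesses)

      omega = 2.0 * np.pi * C_LIGHT / lam
      phase = np.unwrap(np.angle(r))
      gdd_fs2 = np.gradient(np.gradient(phase, omega), omega) * 1e30  # s^2 -> fs^2

      print(f"transmittance near 825 nm: {1.0 - abs(r[lam.size // 2])**2:.3%}")
      print(f"GDD over 750-900 nm: {gdd_fs2.min():.1f} to {gdd_fs2.max():.1f} fs^2")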

  19. RF Conditioning and testing of fundamental power couplers for the RIA project

    SciTech Connect

    M. Stirbet; J. Popielarski; T. L. Grimm; M. Johnson

    2003-09-01

    The Rare Isotope Accelerator (RIA) is the highest priority of the nuclear physics community in the United States for a major new accelerator facility. A principal element of RIA will be a 1.4 GeV superconducting ion linac accelerating ions of isotopes from hydrogen to uranium onto production targets or for further acceleration by a second superconducting linac. The superconducting linac technology is closely related to that used at existing accelerators and the Spallation Neutron Source. Taking advantage of JLAB's SRF Institute facilities and expertise from the SNS project, coupler preparation, RF conditioning, and high-power tests have been performed on fundamental power couplers for the RIA project.

  20. Testing Procedures and Results of the Prototype Fundamental Power Coupler for the Spallation Neutron Source

    SciTech Connect

    Stirbet, M; Campisi, I E; Daly, E F; Davis, G K; Drury, M; Kneisel, P; Myneni, G; Powers, T; Schneider, W J; Wilson, K M; Kang, Y; Cummings, K A; Hardek, T

    2001-06-01

    High-power RF testing with peak power in excess of 500 kW has been performed on prototype Fundamental Power Couplers (FPC) for the Spallation Neutron Source (SNS) superconducting cavities. The testing followed the development of procedures for cleaning, assembling and preparing the FPC for installation in the test stand. The qualification of the couplers has so far been performed only under a limited set of conditions (travelling wave, 20 pps), as the available RF system and control instrumentation are still being improved.

  1. Femtosecond laser fabrication of birefringent directional couplers as polarization beam splitters in fused silica.

    PubMed

    Fernandes, Luís A; Grenier, Jason R; Herman, Peter R; Aitchison, J Stewart; Marques, Paulo V S

    2011-06-20

    Integrated polarization beam splitters based on birefringent directional couplers are demonstrated. The devices are fabricated in bulk fused silica glass by femtosecond laser writing (300 fs, 150 nJ at 500 kHz, 522 nm). The birefringence was measured from the spectral splitting of the Bragg grating resonances associated with the vertically and horizontally polarized modes. Polarization splitting directional couplers were designed and demonstrated with 0.5 dB/cm propagation losses and -19 dB and -24 dB extinction ratios for the polarization splitting.
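
    The birefringence measurement mentioned above follows directly from the Bragg condition lambda_B = 2 n_eff Lambda: the two polarization eigenmodes see slightly different effective indices, so their Bragg resonances are split by delta_lambda_B = 2 Lambda delta_n. The sketch below inverts this relation for example values; the grating period and resonance splitting are assumed for illustration and are not the paper's measured numbers.

      # Birefringence from the polarization splitting of a Bragg grating resonance.
      # lambda_B = 2 * n_eff * period  =>  delta_n = delta_lambda_B / (2 * period)

      grating_period_nm = 535.0   # assumed first-order grating period
      bragg_splitting_pm = 70.0   # assumed splitting between the two polarization resonances

      delta_n = (bragg_splitting_pm * 1e-3) / (2.0 * grating_period_nm)
      print(f"modal birefringence: {delta_n:.2e}")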

  2. Ultralow loss, high Q, four port resonant couplers for quantum optics and photonics.

    PubMed

    Rokhsari, H; Vahala, K J

    2004-06-25

    We demonstrate a low-loss, optical four port resonant coupler (add-drop geometry), using ultrahigh Q (>10^8) toroidal microcavities. Different regimes of operation are investigated by variation of coupling between resonator and fiber taper waveguides. As a result, waveguide-to-waveguide power transfer efficiency of 93% (0.3 dB loss) and nonresonant insertion loss of 0.02% (<0.001 dB) for narrow bandwidth (57 MHz) four port couplers are achieved in this work. The combination of low-loss, fiber compatibility, and wafer-scale design would be suitable for a variety of applications ranging from quantum optics to photonic networks.

  3. Gain characteristics of quantum dot fiber amplifier based on asymmetric tapered fiber coupler

    NASA Astrophysics Data System (ADS)

    Guo, Hairun; Pang, Fufei; Zeng, Xianglong; Wang, Tingyun

    2013-03-01

    We theoretically analyzed the gain characteristics of an integrated semiconductor quantum dot (QD) fiber amplifier (SQDFA) by using a 2 × 2 tapered fiber coupler with a PbS QD-coated layer. The asymmetric structure of the fiber coupler is designed to have a maximum working bandwidth around the 1550-nm band and to provide a desired optical power ratio of the output signals. With 600 mW of 980-nm pump, a gain of 10 dB for a 1550-nm signal is estimated, corresponding to a gain efficiency of 4.5 dB/cm.

  4. Parallel Information Processing.

    ERIC Educational Resources Information Center

    Rasmussen, Edie M.

    1992-01-01

    Examines parallel computer architecture and the use of parallel processors for text. Topics discussed include parallel algorithms; performance evaluation; parallel information processing; parallel access methods for text; parallel and distributed information retrieval systems; parallel hardware for text; and network models for information…

  5. BioGraphE: high-performance bionetwork analysis using the Biological Graph Environment

    PubMed Central

    Chin, George; Chavarria, Daniel G; Nakamura, Grant C; Sofia, Heidi J

    2008-01-01

    Background Graphs and networks are common analysis representations for biological systems. Many traditional graph algorithms such as k-clique, k-coloring, and subgraph matching have great potential as analysis techniques for newly available data in biology. Yet, as the amount of genomic and bionetwork information rapidly grows, scientists need advanced new computational strategies and tools for dealing with the complexities of the bionetwork analysis and the volume of the data. Results We introduce a computational framework for graph analysis called the Biological Graph Environment (BioGraphE), which provides a general, scalable integration platform for connecting graph problems in biology to optimized computational solvers and high-performance systems. This framework enables biology researchers and computational scientists to identify and deploy network analysis applications and to easily connect them to efficient and powerful computational software and hardware that are specifically designed and tuned to solve complex graph problems. In our particular application of BioGraphE to support network analysis in genome biology, we investigate the use of a Boolean satisfiability solver known as Survey Propagation as a core computational solver executing on standard high-performance parallel systems, as well as multi-threaded architectures. Conclusion In our application of BioGraphE to conduct bionetwork analysis of homology networks, we found that BioGraphE and a custom, parallel implementation of the Survey Propagation SAT solver were capable of solving very large bionetwork problems at high rates of execution on different high-performance computing platforms. PMID:18541059

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
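
    Amdahl's law bounds the speedup of a code with parallel fraction f on n cores as S(n) = 1 / ((1 - f) + f / n); the 12-fold speedup on 12 cores reported above is only attainable when the serial fraction is negligible. The sketch below evaluates this bound for a few assumed parallel fractions (illustrative values, not measurements from the paper).

      def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
          """Upper bound on speedup for a code whose parallel fraction is f."""
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

      # Illustrative parallel fractions (assumed).
      for f in (0.90, 0.99, 0.999):
          for cores in (12, 48, 192):
              print(f"f = {f:.3f}, {cores:3d} cores -> speedup <= {amdahl_speedup(f, cores):.1f}x")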

  7. Training High Performance Skills: Fallacies and Guidelines. Final Report.

    ERIC Educational Resources Information Center

    Schneider, Walter

    High performance skills are defined as ones: (1) which require over 100 hours of training, (2) in which a substantial number of individuals fail to develop proficiency, and (3) in which the performance of the expert is qualitatively different from that of the novice. Training programs for developing high performance skills are often based on…

  8. Rotordynamic Instability Problems in High-Performance Turbomachinery

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Rotordynamics and predictions of the stability characteristics of high-performance turbomachinery were discussed. Emphasis was placed on resolving problems in the experimental validation of the forces that influence rotordynamics. Programs to predict or measure forces and force coefficients in high-performance turbomachinery are illustrated. Data for designing new machines with enhanced stability characteristics, or for upgrading existing machines, are presented.

  9. Turning High-Poverty Schools into High-Performing Schools

    ERIC Educational Resources Information Center

    Parrett, William H.; Budge, Kathleen

    2012-01-01

    If some schools can overcome the powerful and pervasive effects of poverty to become high performing, shouldn't any school be able to do the same? Shouldn't we be compelled to learn from those schools? Although schools alone will never systemically eliminate poverty, high-poverty, high-performing (HP/HP) schools take control of what they can to…

  10. An Analysis of a High Performing School District's Culture

    ERIC Educational Resources Information Center

    Corum, Kenneth D.; Schuetz, Todd B.

    2012-01-01

    This report describes a problem based learning project focusing on the cultural elements of a high performing school district. Current literature on school district culture provides numerous cultural elements that are present in high performing school districts. With the current climate in education placing pressure on school districts to perform…

  11. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structures, implemented the data initialization and exchange between computing nodes, and built the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing results from the parallel code with those from the sequential (original) code. Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
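
    In a domain-decomposition code of this kind, each MPI rank owns a sub-domain and exchanges a layer of ghost (halo) cells with its neighbors before each solver iteration. The sketch below shows this pattern for a 1-D decomposition with mpi4py; it is a generic illustration of the technique, not code from THC-MP.

      # Run with e.g.: mpiexec -n 4 python halo_exchange.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns n_local interior cells plus one ghost cell on each side.
      n_local = 100
      field = np.zeros(n_local + 2)
      field[1:-1] = rank  # fill the interior with a rank-dependent value for illustration

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Exchange halos: send the first/last interior cell, receive into the ghost cells.
      comm.Sendrecv(sendbuf=field[1:2], dest=left, recvbuf=field[-1:], source=right)
      comm.Sendrecv(sendbuf=field[-2:-1], dest=right, recvbuf=field[0:1], source=left)

      print(f"rank {rank}: left ghost = {field[0]}, right ghost = {field[-1]}")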

  12. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    SciTech Connect

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling and the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of different categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation and characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on ~1000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large scale high performance computing resources for the first time.

  13. Mechanisms of Microwave Loss Tangent in High Performance Dielectric Materials

    NASA Astrophysics Data System (ADS)

    Liu, Lingtao

    The mechanism of loss in high-performance microwave dielectrics with complex perovskite structure, including Ba(Zn1/3Ta2/3)O3, Ba(Cd1/3Ta2/3)O3, ZrTiO4-ZnNb2O6, Ba(Zn1/3Nb2/3)O3, and BaTi4O9-BaZn2Ti4O11, has been investigated. We studied materials synthesized in our own lab and obtained from commercial vendors. The measured loss tangent was then correlated with the optical, structural, and electrical properties of the material. To accurately and quantitatively determine the microwave loss and Electron Paramagnetic Resonance (EPR) spectra as a function of temperature and magnetic field, we developed parallel plate resonator (PPR) and dielectric resonator (DR) techniques. Our studies found a marked increase in the loss at low temperatures in materials containing transition metals with unpaired d-electrons, a result of resonant spin excitations in isolated atoms (light doping) or exchange-coupled clusters (moderate to high doping); a mechanism that differs from the usual suspects. The loss tangent can be drastically reduced by applying static magnetic fields. Our measurements also show that this mechanism contributes significantly to room-temperature loss, but does not dominate it. In order to study the electronic structure of these materials, we grew single-crystal thin-film dielectrics for spectroscopic studies, including angle-resolved photoemission spectroscopy (ARPES) experiments. We synthesized stoichiometric Ba(Cd1/3Ta2/3)O3 [BCT] (100) dielectric thin films on MgO (100) substrates using Pulsed Laser Deposition. Over 99% of the BCT film was found to be epitaxial when grown with an elevated substrate temperature of 635 °C, an elevated oxygen pressure of 53 Pa, and a Cd-enriched BCT target with a 1 mol BCT : 1.5 mol CdO composition. Analysis of ultraviolet optical absorption results indicates that BCT has a bandgap of 4.9 eV.

  14. Progress towards high-performance, steady-state spherical torus

    SciTech Connect

    Ono, M.; Bell, M. G.; Bell, R. E.; Bigelow, T.; Bitter, M.; Blanchard, W.; Boedo, J.; Bourdelle, C.; Bush, C.; Choe, W.; Chrzanowski, J.; Darrow, D. S.; Diem, S. J.; Doerner, R.; Efthimion, P. C.; Ferron, J. R.; Fonck, R. J.; Fredrickson, E. D.; Garstka, G. D.; Gates, D A; Gray, T.; Grisham, L. R.; Heidbrink, W.; Hill, K. W.; Hoffman, D.; Jarboe, T. R.; Johnson, D. W.; Kaita, R.; Kaye, S. M.; Kessel, C.; Kim, J. H.; Kissick, M. W.; Kubota, S.; Kugel, H. W.; LeBlanc, B. P.; Lee, K.; Lee, S. G.; Lewicki, B. T.; Luckhardt, S.; Maingi, R.; Majeski, R.; Manickam, J.; Maqueda, R.; Mau, T. K.; Mazzucato, E.; Medley, S. S.; Menard, J.; Mueller, D.; Nelson, B. A.; Neumeyer, C.; Nishino, N.; Ostrander, C. N.; Pacella, D.; Paoletti, F.; Park, H. K.; Park, W.; Paul, S. F.; Peng, Y-K M.; Phillips, C. K.; Pinsker, R.; Probert, P. H.; Ramakrishnan, S.; Raman, R.; Redi, M.; Roquemore, A. L.; Rosenberg, A.; Ryan, P. M.; Sabbagh, S. A.; Schaffer, M.; Schooff, R. J.; Seraydarian, R.; Skinner, C. H.; Sontag, A. C.; Soukhanovskii, V.; Spaleta, J.; Stevenson, T.; Stutman, D.; Swain, D. W.; Synakowski, E.; Takase, Y.; Tang, X.; Taylor, G.; Timberlake, J.; Tritz, K. L.; Unterberg, E. A.; Halle, A. Von.; Wilgen, J.; Williams, M.; Wilson, J. R.; Xu, X.; Zweben, S. J.; Akers, R.; Barry, R. E.; Beiersdorfer, P.; Bialek, J. M.; Blagojevic, B.; Bonoli, P. T.; Carter, M. D.; Davis, W.; Deng, B.; Dudek, L.; Egedal, J.; Ellis, R.; Finkenthal, M.; Foley, J.; Fredd, E.; Glasser, A.; Gibney, T.; Gilmore, M.; Goldston, R. J.; Hatcher, R. E.; Hawryluk, R. J.; Houlberg, W.; Harvey, R.; Jardin, S. C.; Hosea, J. C.; Ji, H.; Kalish, M.; Lowrance, J.; Lao, L. L.; Levinton, F. M.; Luhmann, N. C.; Marsala, R.; Mastravito, D.; Menon, M. M.; Mitarai, O.; Nagata, M.; Oliaro, G.; Parsells, R.; Peebles, T.; Peneflor, B.; Piglowski, D.; Porter, G. D.; Ram, A. K.; Rensink, M.; Rewoldt, G.; Robinson, J.; Roney, P.; Shaing, K.; Shiraiwa, S.; Sichta, P.; Stotler, D.; Stratton, B. C.; Vero, R.; Wampler, W. R.; Wurden, G. A.

    2003-12-01

    Research on the spherical torus (or spherical tokamak) (ST) is being pursued to explore the scientific benefits of modifying the field line structure from that in more moderate aspect ratio devices, such as the conventional tokamak. The ST experiments are being conducted in various US research facilities including the MA-class National Spherical Torus Experiment (NSTX) at Princeton, and three medium sized ST research facilities: PEGASUS at University of Wisconsin, HIT-II at University of Washington, and CDX-U at Princeton. In the context of the fusion energy development path being formulated in the US, an ST-based Component Test Facility (CTF) and, ultimately a Demo device, are being discussed. For these, it is essential to develop high performance, steady-state operational scenarios. The relevant scientific issues are energy confinement, MHD stability at high beta (β), non-inductive sustainment, Ohmic-solenoid-free start-up, and power and particle handling. In the confinement area, the NSTX experiments have shown that the confinement can be up to 50% better than the ITER-98-pby2 H-mode scaling, consistent with the requirements for an ST-based CTF and Demo. In NSTX, CTF-relevant average toroidal beta values βT of up to 35% with a near unity central βT have been obtained. NSTX will be exploring advanced regimes where βT up to 40% can be sustained through active stabilization of resistive wall modes. To date, the most successful technique for non-inductive sustainment in NSTX is the high beta poloidal regime, where discharges with a high non-inductive fraction (~ 60% bootstrap current+NBI current drive) were sustained over the resistive skin time. Research on radio-frequency (RF) based heating and current drive utilizing high harmonic fast wave and electron Bernstein wave is also pursued on NSTX, PEGASUS, and CDX-U. For non-inductive start-up, the coaxial helicity injection, developed in HIT/HIT-II, has been adopted on NSTX to

  15. Progress towards high-performance, steady-state spherical torus.

    SciTech Connect

    Lee, S.G; Kugel, W.; Efthimion, P. C.; Kissick, M. W.; Bourdelle, C.; Kim, J.H; Gray, T.; Garstka, G. D.; Fonck, R. J.; Doerner, R.; Diem, S.J.; Pacella, D.; Nishino, N.; Ferron, J. R.; Skinner, C. H.; Stutman, D.; Soukhanovskii, V.; Choe, W.; Chrzanowski, J.; Mau, T.K.; Bell, Michael G.; Raman, R.; Peng, Y-K. M.; Ono, M.; Park, W.; Hoffman, D.; Maqueda, R.; Kaye, S. M.; Kaita, R.; Jarboe, T.R.; Hill, K.W.; Heidbrink, W.; Spaleta, J.; Sontag, A.C; Seraydarian, R.; Schooff, R.J.; Sabbagh, S.A.; Menard, J.; Mazzucato, E.; Lee, K.; LeBlanc, B.; Probert, P. H.; Blanchard, W.; Wampler, William R.; Swain, D. W.; Ryan, P.M.; Rosenberg, A.; Ramakrishnan, S.; Phillips, C.K.; Park, H.K.; Roquemore, A. L.; Paoletti, F.; Medley, S. S.; Fredrickson, E. D.; Kessel, C. E.; Stevenson, T.; Darrow, D. S.; Majeski, R.; Bitter, M.; Neumeyer, C.; Nelson, B.A.; Paul, S. F.; Manickam, J.; Ostrander, C. N.; Mueller, D.; Lewicki, B.T; Luckhardt, S.; Johnson, D.W.; Grisham, L.R.; Kubota, Shigeru; Gates, D.A.; Bush, C.; Synakowski, E.J.; Schaffer, M.; Boedo, J.; Maingi, R.; Redi, M.; Pinsker, R.; Bigelow, T.; Bell, R. E.

    2004-06-01

    Research on the spherical torus (or spherical tokamak) (ST) is being pursued to explore the scientific benefits of modifying the field line structure from that in more moderate aspect ratio devices, such as the conventional tokamak. The ST experiments are being conducted in various US research facilities including the MA-class National Spherical Torus Experiment (NSTX) at Princeton, and three medium sized ST research facilities: PEGASUS at University of Wisconsin, HIT-II at University of Washington, and CDX-U at Princeton. In the context of the fusion energy development path being formulated in the US, an ST-based Component Test Facility (CTF) and, ultimately a Demo device, are being discussed. For these, it is essential to develop high performance, steady-state operational scenarios. The relevant scientific issues are energy confinement, MHD stability at high beta (β), non-inductive sustainment, Ohmic-solenoid-free start-up, and power and particle handling. In the confinement area, the NSTX experiments have shown that the confinement can be up to 50% better than the ITER-98-pby2 H-mode scaling, consistent with the requirements for an ST-based CTF and Demo. In NSTX, CTF-relevant average toroidal beta values βT of up to 35% with a near unity central βT have been obtained. NSTX will be exploring advanced regimes where βT up to 40% can be sustained through active stabilization of resistive wall modes. To date, the most successful technique for non-inductive sustainment in NSTX is the high beta poloidal regime, where discharges with a high non-inductive fraction (~60% bootstrap current+NBI current drive) were sustained over the resistive skin time. Research on radio-frequency (RF) based heating and current drive utilizing high harmonic fast wave and electron Bernstein wave is also pursued on NSTX, PEGASUS, and CDX-U. For non-inductive start-up, the coaxial helicity injection, developed in HIT/HIT-II, has been adopted on NSTX

  16. Progress Towards High Performance, Steady-state Spherical Torus

    SciTech Connect

    M. Ono; M.G. Bell; R.E. Bell; T. Bigelow; M. Bitter; W. Blanchard; J. Boedo; C. Bourdelle; C. Bush; W. Choe; J. Chrzanowski; D.S. Darrow; S.J. Diem; R. Doerner; P.C. Efthimion; J.R. Ferron; R.J. Fonck; E.D. Fredrickson; G.D. Garstka; D.A. Gates; T. Gray; L.R. Grisham; W. Heidbrink; K.W. Hill; D. Hoffman; T.R. Jarboe; D.W. Johnson; R. Kaita; S.M. Kaye; C. Kessel; J.H. Kim; M.W. Kissick; S. Kubota; H.W. Kugel; B.P. LeBlanc; K. Lee; S.G. Lee; B.T. Lewicki; S. Luckhardt; R. Maingi; R. Majeski; J. Manickam; R. Maqueda; T.K. Mau; E. Mazzucato; S.S. Medley; J. Menard; D. Mueller; B.A. Nelson; C. Neumeyer; N. Nishino; C.N. Ostrander; D. Pacella; F. Paoletti; H.K. Park; W. Park; S.F. Paul; Y.-K. M. Peng; C.K. Phillips; R. Pinsker; P.H. Probert; S. Ramakrishnan; R. Raman; M. Redi; A.L. Roquemore; A. Rosenberg; P.M. Ryan; S.A. Sabbagh; M. Schaffer; R.J. Schooff; R. Seraydarian; C.H. Skinner; A.C. Sontag; V. Soukhanovskii; J. Spaleta; T. Stevenson; D. Stutman; D.W. Swain; E. Synakowski; Y. Takase; X. Tang; G. Taylor; J. Timberlake; K.L. Tritz; E.A. Unterberg; A. Von Halle; J. Wilgen; M. Williams; J.R. Wilson; X. Xu; S.J. Zweben; R. Akers; R.E. Barry; P. Beiersdorfer; J.M. Bialek; B. Blagojevic; P.T. Bonoli; M.D. Carter; W. Davis; B. Deng; L. Dudek; J. Egedal; R. Ellis; M. Finkenthal; J. Foley; E. Fredd; A. Glasser; T. Gibney; M. Gilmore; R.J. Goldston; R.E. Hatcher; R.J. Hawryluk; W. Houlberg; R. Harvey; S.C. Jardin; J.C. Hosea; H. Ji; M. Kalish; J. Lowrance; L.L. Lao; F.M. Levinton; N.C. Luhmann; R. Marsala; D. Mastravito; M.M. Menon; O. Mitarai; M. Nagata; G. Oliaro; R. Parsells; T. Peebles; B. Peneflor; D. Piglowski; G.D. Porter; A.K. Ram; M. Rensink; G. Rewoldt; P. Roney; K. Shaing; S. Shiraiwa; P. Sichta; D. Stotler; B.C. Stratton; R. Vero; W.R. Wampler; G.A. Wurden

    2003-10-02

    Research on the Spherical Torus (or Spherical Tokamak) is being pursued to explore the scientific benefits of modifying the field line structure from that in more moderate aspect-ratio devices, such as the conventional tokamak. The Spherical Torus (ST) experiments are being conducted in various U.S. research facilities including the MA-class National Spherical Torus Experiment (NSTX) at Princeton, and three medium-size ST research facilities: Pegasus at University of Wisconsin, HIT-II at University of Washington, and CDX-U at Princeton. In the context of the fusion energy development path being formulated in the U.S., an ST-based Component Test Facility (CTF) and, ultimately a Demo device, are being discussed. For these, it is essential to develop high-performance, steady-state operational scenarios. The relevant scientific issues are energy confinement, MHD stability at high beta (β), noninductive sustainment, ohmic-solenoid-free start-up, and power and particle handling. In the confinement area, the NSTX experiments have shown that the confinement can be up to 50% better than the ITER-98-pby2 H-mode scaling, consistent with the requirements for an ST-based CTF and Demo. In NSTX, CTF-relevant average toroidal beta values βT of up to 35% with the near unity central βT have been obtained. NSTX will be exploring advanced regimes where βT up to 40% can be sustained through active stabilization of resistive wall modes. To date, the most successful technique for noninductive sustainment in NSTX is the high beta-poloidal regime, where discharges with a high noninductive fraction (~60% bootstrap current + neutral-beam-injected current drive) were sustained over the resistive skin time. Research on radio-frequency-based heating and current drive utilizing HHFW (High Harmonic Fast Wave) and EBW (Electron Bernstein Wave) is also pursued on NSTX, Pegasus, and CDX-U. For noninductive start-up, the Coaxial Helicity Injection (CHI), developed in HIT/HIT-II, has been

  17. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  18. Real-time Tsunami Inundation Prediction Using High Performance Computers

    NASA Astrophysics Data System (ADS)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are being actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow water equations for inundation prediction is computationally demanding, it has become feasible through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
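
    The leap-frog scheme on a staggered grid referred to above advances water-surface elevation and volume flux on alternating grid points. The sketch below is a minimal one-dimensional, linear version over flat bathymetry; the grid spacing, depth, and time step are assumed for illustration and are far simpler than the nested 405 m-to-5 m grids used in the study.

      import numpy as np

      # 1-D linear shallow water equations, staggered leap-frog scheme (illustrative values).
      g = 9.81            # gravity (m/s^2)
      depth = 100.0       # uniform water depth (m), assumed
      dx = 405.0          # grid spacing (m), matching the coarsest grid
      nx = 400
      c = np.sqrt(g * depth)
      dt = 0.5 * dx / c   # satisfies the CFL condition c*dt/dx <= 1

      eta = np.exp(-((np.arange(nx) - nx / 2) * dx / 5000.0) ** 2)  # initial hump (m)
      flux = np.zeros(nx + 1)                                        # volume flux at cell faces

      for _ in range(600):
          # Continuity: update surface elevation from the divergence of the flux.
          eta -= dt / dx * (flux[1:] - flux[:-1])
          # Momentum (linear): update interior fluxes from the surface gradient.
          flux[1:-1] -= g * depth * dt / dx * (eta[1:] - eta[:-1])
          # Simple reflective boundaries: zero flux at the domain edges.
          flux[0] = flux[-1] = 0.0

      print(f"max elevation after {600 * dt:.0f} s: {eta.max():.3f} m")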

  19. High-performance workplace practices in nursing homes: an economic perspective.

    PubMed

    Bishop, Christine E

    2014-02-01

    To develop implications for research, practice and policy, selected economics and human resources management research literature was reviewed to compare and contrast nursing home culture change work practices with high-performance human resource management systems in other industries. The organization of nursing home work under culture change has much in common with high-performance work systems, which are characterized by increased autonomy for front-line workers, self-managed teams, flattened supervisory hierarchy, and the aspiration that workers use specific knowledge gained on the job to enhance quality and customization. However, successful high-performance work systems also entail intensive recruitment, screening, and on-going training of workers, and compensation that supports selective hiring and worker commitment; these features are not usual in the nursing home sector. Thus despite many parallels with high-performance work systems, culture change work systems are missing essential elements: those that require higher compensation. If purchasers, including public payers, were willing to pay for customized, resident-centered care, productivity gains could be shared with workers, and the nursing home sector could move from a low-road to a high-road employment system.

  20. Element for use in an inductive coupler for downhole drilling components

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Pixton, David S.; Dahlgren, Scott; Fox, Joe; Sneddon, Cameron

    2006-08-29

    The present invention includes an element for use in an inductive coupler in a downhole component. The element includes a plurality of ductile, generally U-shaped leaves that are electrically conductive. The leaves are less than about 0.0625" thick and are separated by an electrically insulating material. These leaves are aligned so as to form a generally circular trough. The invention also includes an inductive coupler for use in downhole components, the inductive coupler including an annular housing having a recess with a magnetically conductive, electrically insulating (MCEI) element disposed in the recess. The MCEI element includes a plurality of segments where each segment further includes a plurality of ductile, generally U-shaped electrically conductive leaves. Each leaf is less than about 0.0625" thick and separated from the otherwise adjacent leaves by electrically insulating material. The segments and leaves are aligned so as to form a generally circular trough. The inductive coupler further includes an insulated conductor disposed within the generally circular trough. A polymer fills spaces between otherwise adjacent segments, the annular housing, insulated conductor, and further fills the circular trough.