Science.gov

Sample records for highly scalable udp-based

  1. Highly Scalable, UDP-Based Network Transport Protocols for Lambda Grids and 10 GE Routed Networks

    SciTech Connect

    PI: Robert Grossman Co-PI: Stephen Eick

    2009-08-04

    Summary of Report: In work prior to this grant, NCDM developed a high-performance data transport protocol called SABUL. During this grant, we refined SABUL's functionality, extended its capabilities, and incorporated them into a new protocol called UDP-based Data Transport Protocol, or UDT. We also began preliminary work on Composable UDT, a version of UDT that allows the user to choose among different congestion control algorithms and implement the algorithm of their choice at compile time. Specifically, we:
    · Investigated the theoretical foundations of protocols similar to SABUL and UDT.
    · Performed design and development work on UDT, a protocol that uses UDP in both the data and control channels.
    · Began design and development work on Composable UDT, a protocol that supports the use of different congestion control algorithms by simply including the appropriate library when compiling the code.
    · Performed experimental studies of UDT and Composable UDT using real-world applications such as the Sloan Digital Sky Survey (SDSS) astronomical data sets.
    · Released several versions of UDT and Composable UDT, the most recent being v3.1.
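
    A minimal sketch of the Composable UDT idea described above: congestion control factored out as a pluggable class behind fixed hooks, so a different algorithm can be selected at build/import time. The class and method names below are illustrative assumptions, not the actual UDT API.

    ```python
    # Sketch of pluggable congestion control in the spirit of Composable UDT.
    # Names are illustrative, not the real UDT interface.

    class CongestionControl:
        """Base class: hooks the transport core invokes on protocol events."""
        def __init__(self):
            self.rate_pkts_per_sec = 1000.0
        def on_ack(self): pass
        def on_loss(self): pass

    class AIMD(CongestionControl):
        """A TCP-friendly additive-increase/multiplicative-decrease plug-in."""
        def on_ack(self):
            self.rate_pkts_per_sec += 10.0   # additive increase per ACK
        def on_loss(self):
            self.rate_pkts_per_sec *= 0.5    # multiplicative decrease on loss

    def make_sender(cc_class=AIMD):
        # The transport core sees only the base-class hooks, so a different
        # algorithm is swapped in simply by supplying a different class.
        return cc_class()

    cc = make_sender()
    cc.on_ack(); cc.on_loss()
    print(cc.rate_pkts_per_sec)  # 505.0
    ```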

  2. Highly scalable coherent fiber combining

    NASA Astrophysics Data System (ADS)

    Antier, M.; Bourderionnet, J.; Larat, C.; Lallier, E.; Brignon, A.

    2015-10-01

    An architecture for active coherent fiber laser beam combining using an interferometric measurement is demonstrated. This technique measures the exact phase error of each fiber beam in a single shot, making the method a promising candidate for scaling to very large numbers of combined fibers. Our experimental system, composed of 16 independent fiber channels, is used to evaluate the achieved phase-locking stability in terms of phase-shift error and bandwidth. We show that only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept. Furthermore, we propose a beam shaping technique to increase the combining efficiency.
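
    As an illustration of single-shot phase retrieval from a handful of camera pixels, the sketch below (our assumed demodulation scheme, not the authors' code) recovers a fringe phase from 8 samples by quadrature demodulation over one fringe period.

    ```python
    # Estimate the phase of a sinusoidal interference fringe sampled over one
    # period; 8 pixels per fiber suffice, as in the abstract's configuration.
    import numpy as np

    def fringe_phase(pixels):
        """Quadrature demodulation of one fringe period."""
        n = len(pixels)
        x = 2 * np.pi * np.arange(n) / n
        i_sin = np.dot(pixels, np.sin(x))   # quadrature component
        i_cos = np.dot(pixels, np.cos(x))   # in-phase component
        return np.arctan2(i_sin, i_cos)

    true_phase = 0.3
    samples = 2.0 + np.cos(2 * np.pi * np.arange(8) / 8 - true_phase)
    print(fringe_phase(samples))  # ~0.3 rad recovered from only 8 pixels
    ```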

  3. High Scalability Video ISR Exploitation

    DTIC Science & Technology

    2012-10-01

    ...Surveillance, ARGUS) on the National Image Interpretability Rating Scale (NIIRS) at level 6. Ultra-high quality cameras like the Digital Cinema 4K (DC-4K), which recognizes objects smaller than people, will be available... purchase ultra-high quality cameras like the DC-4K for use in the field. However, even if such a UAV sensor with a DC-4K was flown...

  4. Scalable resource management in high performance computers.

    SciTech Connect

    Frachtenberg, E.; Petrini, F.; Fernandez Peinador, J.; Coll, S.

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a 12 MB binary on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
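
    The near-constant-time binary distribution can be made concrete with a back-of-envelope model: a tree broadcast delivers the binary in O(log N) store-and-forward steps, so launch time grows only logarithmically with machine size. The sketch below is our illustration with assumed link parameters, not STORM's hardware multicast.

    ```python
    # Model launch-time scaling for a binomial-tree broadcast of a job binary.
    # The link bandwidth and per-hop latency are assumptions for illustration.
    import math

    def broadcast_time(n_nodes, binary_mb=12.0, link_mb_per_s=400.0, hop_us=10.0):
        steps = math.ceil(math.log2(n_nodes)) if n_nodes > 1 else 0
        return steps * (binary_mb / link_mb_per_s + hop_us * 1e-6)  # seconds

    for n in (32, 1024, 32768):
        print(n, round(broadcast_time(n), 3))  # time grows with log2(N), not N
    ```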

  5. A Scalable High-Throughput Chemical Synthesizer

    PubMed Central

    Livesay, Eric A.; Liu, Ying-Horng; Luebke, Kevin J.; Irick, Joel; Belosludtsev, Yuri; Rayner, Simon; Balog, Robert; Johnston, Stephen Albert

    2002-01-01

    A machine that employs a novel reagent delivery technique for biomolecular synthesis has been developed. This machine separates the addressing of individual synthesis sites from the actual process of reagent delivery by using masks placed over the sites. Because of this separation, this machine is both cost-effective and scalable, and thus the time required to synthesize 384 or 1536 unique biomolecules is very nearly the same. Importantly, the mask design allows scaling of the number of synthesis sites without the addition of new valving. Physical and biological comparisons between DNA made on a commercially available synthesizer and this unit show that it produces DNA of similar quality. PMID:12466300

  6. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    NASA Technical Reports Server (NTRS)

    Christensen, Daniel; Das, Santanu; Srivastava, Ashok N.

    2009-01-01

    Matching Pursuit Decomposition (MPD) is a powerful iterative algorithm for signal decomposition and feature extraction. MPD decomposes any signal into linear combinations of its dictionary elements, or atoms. A best-fit atom from an arbitrarily defined dictionary is determined through cross-correlation. The selected atom is subtracted from the signal, and this procedure is repeated on the residual in subsequent iterations until a stopping criterion is met. The reconstructed signal reveals the waveform structure of the original signal. However, a sufficiently large dictionary is required for an accurate reconstruction; this in turn increases the computational burden of the algorithm, thus limiting its applicability and level of adoption. The purpose of this research is to improve the scalability and performance of the classical MPD algorithm. Correlation thresholds were defined to prune insignificant atoms from the dictionary. The Coarse-Fine Grids and Multiple Atom Extraction techniques were proposed to decrease the computational burden of the algorithm. The Coarse-Fine Grids method enabled the approximation and refinement of the parameters for the best-fit atom. The ability to extract multiple atoms within a single iteration enhanced the effectiveness and efficiency of each iteration. These improvements were implemented to produce an improved Matching Pursuit Decomposition algorithm entitled MPD++. Disparate signal decomposition applications may require a particular emphasis on accuracy or computational efficiency. The prominence of the key signal features required for proper signal classification dictates the level of accuracy necessary in the decomposition. The MPD++ algorithm may be easily adapted to accommodate the imposed requirements. Certain feature extraction applications may require rapid signal decomposition. The full potential of MPD++ may be utilized to produce incredible performance gains while extracting only slightly less energy than the...
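
    The following sketch (ours, not the MPD++ code) illustrates two of the ideas above: pruning atoms whose correlation falls below a threshold, and extracting several atoms per iteration.

    ```python
    # Matching pursuit with correlation-threshold pruning and multiple atom
    # extraction per iteration. Illustrative only; not the MPD++ implementation.
    import numpy as np

    def mp_decompose(signal, dictionary, n_iter=10, corr_thresh=0.1, atoms_per_iter=2):
        """dictionary: rows are unit-norm atoms; returns [(index, coeff)], residual."""
        residual = signal.astype(float).copy()
        atoms = []
        for _ in range(n_iter):
            corr = dictionary @ residual                     # correlate all atoms
            best = np.argsort(-np.abs(corr))[:atoms_per_iter]
            for idx in best:                                 # extract several atoms
                c = dictionary[idx] @ residual               # re-fit against residual
                if abs(c) < corr_thresh:                     # prune weak atoms
                    continue
                residual -= c * dictionary[idx]
                atoms.append((int(idx), float(c)))
        return atoms, residual

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    sig = 3.0 * D[7] - 2.0 * D[42]
    found, res = mp_decompose(sig, D)
    print(found[:2], np.linalg.norm(res))  # atoms 7 and 42 dominate; small residual
    ```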

  7. Scalable photonic crystal chips for high sensitivity protein detection.

    PubMed

    Liang, Feng; Clarke, Nigel; Patel, Parth; Loncar, Marko; Quan, Qimin

    2013-12-30

    Scalable microfabrication technology has enabled the semiconductor and microelectronics industries, among other fields. Meanwhile, rapid and sensitive biomolecule detection is increasingly important for drug discovery and biomedical diagnostics. In this work, we designed and demonstrated photonic crystal sensor chips that have high sensitivity for protein detection and can be mass-produced with scalable deep-UV lithography. We demonstrated label-free detection of carcinoembryonic antigen from pg/mL to μg/mL using high-quality-factor photonic crystal nanobeam cavities.

  8. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    SciTech Connect

    Masalma, Yahya; Jiao, Yu

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested, and the obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithms; if the mixed model is used, attention should be paid to scalability and accuracy.
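
    For illustration, a Sobol-sequence quasi-Monte Carlo estimate can be written in a few lines; the example below uses scipy's qmc module (our choice of tooling, not the report's MPI/OpenMP implementation).

    ```python
    # Quasi-Monte Carlo integration with a scrambled Sobol sequence.
    import numpy as np
    from scipy.stats import qmc

    dim = 6
    sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
    x = sampler.random_base2(m=14)     # 2**14 low-discrepancy points in [0,1)^dim

    # Test integrand: the product of coordinates has exact integral (1/2)**dim.
    estimate = np.prod(x, axis=1).mean()
    print(estimate, 0.5 ** dim)        # close agreement at modest sample count
    ```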

  9. Low power, scalable multichannel high voltage controller

    DOEpatents

    Stamps, James Frederick; Crocker, Robert Ward; Yee, Daniel Dadwa; Dils, David Wright

    2006-03-14

    A low voltage control circuit is provided for individually controlling high voltage power provided over bus lines to a multitude of interconnected loads. An example of a load is a drive for capillary channels in a microfluidic system. Control is distributed from a central high voltage circuit, rather than using a number of large expensive central high voltage circuits to enable reducing circuit size and cost. Voltage is distributed to each individual load and controlled using a number of high voltage controller channel switches connected to high voltage bus lines. The channel switches each include complementary pull up and pull down photo isolator relays with photo isolator switching controlled from the central high voltage circuit to provide a desired bus line voltage. Switching of the photo isolator relays is further controlled in each channel switch using feedback from a resistor divider circuit to maintain the bus voltage swing within desired limits. Current sensing is provided using a switched resistive load in each channel switch, with switching of the resistive loads controlled from the central high voltage circuit.

  10. Low power, scalable multichannel high voltage controller

    DOEpatents

    Stamps, James Frederick; Crocker, Robert Ward; Yee, Daniel Dadwa; Dils, David Wright

    2008-03-25

    A low voltage control circuit is provided for individually controlling high voltage power provided over bus lines to a multitude of interconnected loads. An example of a load is a drive for capillary channels in a microfluidic system. Control is distributed from a central high voltage circuit, rather than using a number of large expensive central high voltage circuits to enable reducing circuit size and cost. Voltage is distributed to each individual load and controlled using a number of high voltage controller channel switches connected to high voltage bus lines. The channel switches each include complementary pull up and pull down photo isolator relays with photo isolator switching controlled from the central high voltage circuit to provide a desired bus line voltage. Switching of the photo isolator relays is further controlled in each channel switch using feedback from a resistor divider circuit to maintain the bus voltage swing within desired limits. Current sensing is provided using a switched resistive load in each channel switch, with switching of the resistive loads controlled from the central high voltage circuit.

  11. Scalable high-power optically pumped GaAs laser

    NASA Astrophysics Data System (ADS)

    Le, H. Q.; di Cecca, S.; Mooradian, A.

    1991-05-01

    The use of disk-geometry, optically pumped semiconductor gain elements for high-power scalability and good transverse-mode quality has been studied. A room-temperature TEM00 transverse-mode, external-cavity GaAs disk laser has been demonstrated with 500 W peak-power output and 40-percent slope efficiency when pumped by a Ti:Al2O3 laser. The conditions for diode laser pumping are shown to be consistent with the available power level.

  12. A Highly Scalable Peptide-Based Assay System for Proteomics

    PubMed Central

    Kozlov, Igor A.; Thomsen, Elliot R.; Munchel, Sarah E.; Villegas, Patricia; Capek, Petr; Gower, Austin J.; K. Pond, Stephanie J.; Chudin, Eugene; Chee, Mark S.

    2012-01-01

    We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays. PMID:22701568

  13. Scalable Multiprocessor for High-Speed Computing in Space

    NASA Technical Reports Server (NTRS)

    Lux, James; Lang, Minh; Nishimoto, Kouji; Clark, Douglas; Stosic, Dorothy; Bachmann, Alex; Wilkinson, William; Steffke, Richard

    2004-01-01

    A report discusses the continuing development of a scalable multiprocessor computing system for hard real-time applications aboard a spacecraft. "Hard realtime applications" signifies applications, like real-time radar signal processing, in which the data to be processed are generated at "hundreds" of pulses per second, each pulse "requiring" millions of arithmetic operations. In these applications, the digital processors must be tightly integrated with analog instrumentation (e.g., radar equipment), and data input/output must be synchronized with analog instrumentation, controlled to within fractions of a microsecond. The scalable multiprocessor is a cluster of identical commercial-off-the-shelf generic DSP (digital-signal-processing) computers plus generic interface circuits, including analog-to-digital converters, all controlled by software. The processors are computers interconnected by high-speed serial links. Performance can be increased by adding hardware modules and correspondingly modifying the software. Work is distributed among the processors in a parallel or pipeline fashion by means of a flexible master/slave control and timing scheme. Each processor operates under its own local clock; synchronization is achieved by broadcasting master time signals to all the processors, which compute offsets between the master clock and their local clocks.
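
    The offset-based synchronization can be stated in one line; the toy function below (ours, not the flight software) shows the correction each processor applies when a master time broadcast arrives.

    ```python
    # Clock alignment by broadcast offset, as in the master/slave scheme above.
    def compute_offset(master_time, local_time_at_receipt, link_delay=0.0):
        """Offset to add to the local clock to align it with the master clock."""
        return (master_time + link_delay) - local_time_at_receipt

    # A processor whose clock reads 1000.0 us when the master broadcasts 1002.5 us
    # corrects subsequent timestamps by +2.5 us (assuming negligible link delay).
    print(compute_offset(1002.5, 1000.0))  # 2.5
    ```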

  14. High-performance, scalable optical network-on-chip architectures

    NASA Astrophysics Data System (ADS)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures; the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, together with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed: comb switches expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of...

  15. A scalable approach for high throughput branch flow filtration.

    PubMed

    Inglis, David W; Herman, Nick

    2013-05-07

    Microfluidic continuous flow filtration methods have the potential for very high size resolution using minimum feature sizes that are larger than the separation size, thereby circumventing the problem of clogging. Branch flow filtration is particularly promising because it has an unlimited dynamic range (ratio of the largest passable particle to the smallest separated particle), but it suffers from very poor volume throughput because, when many branches are used, they cannot be identical if each is to have the same size cut-off. We describe a new iterative approach to the design of branch filtration devices that overcomes this limitation without large dead volumes. This is demonstrated by numerical modelling, fabrication and testing of devices with 20 branches, with dynamic ranges up to 6.9 and high filtration ratios (14-29%), on beads and fungal spores. The filters have a sharp size cutoff (10× depletion for a 12% size difference), with large-particle rejection equivalent to a 20th-order Butterworth low-pass filter. The devices are fully scalable, enabling higher throughput and smaller cutoff sizes, and they are compatible with ultra-low-cost fabrication.
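
    The Butterworth analogy can be checked directly: the standard 20th-order low-pass magnitude response evaluated 12% above cutoff passes roughly 10% of particles, matching the quoted depletion (our arithmetic, using the textbook Butterworth form).

    ```python
    # Butterworth magnitude response applied to particle size instead of frequency.
    def butterworth_pass(size_ratio, order=20):
        """Fraction passed for particle size / cutoff size = size_ratio."""
        return 1.0 / (1.0 + size_ratio ** (2 * order)) ** 0.5

    print(butterworth_pass(1.12))  # ~0.10, i.e. ~10x depletion for a 12% oversize
    ```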

  16. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    SciTech Connect

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few 10s of gigaops, data archived in HSMs in a few 10s of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to rise to 10s of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  17. Providing scalable system software for high-end simulations

    SciTech Connect

    Greenberg, D.

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  18. Scalable van der Waals Heterojunctions for High-Performance Photodetectors.

    PubMed

    Yeh, Chao-Hui; Liang, Zheng-Yong; Lin, Yung-Chang; Wu, Tien-Lin; Fan, Ta; Chu, Yu-Cheng; Ma, Chun-Hao; Liu, Yu-Chen; Chu, Ying-Hao; Suenaga, Kazutomo; Chiu, Po-Wen

    2017-10-05

    Atomically thin two-dimensional (2D) materials have attracted increasing attention for optoelectronic applications in view of their compact, ultrathin, flexible, and superior photosensing characteristics. Yet, scalable growth of 2D heterostructures and the fabrication of integrable optoelectronic devices remain unaddressed. Here, we show a scalable formation of 2D stacks and the fabrication of phototransistor arrays, with each photosensing element made of a graphene-WS2 vertical heterojunction and individually addressable by a local top gate. The constituent layers in the heterojunction are grown using chemical vapor deposition in combination with sulfurization, providing a clean junction interface and processing scalability. The aluminum top gate possesses a self-limiting oxide around the gate structure, allowing for a self-aligned deposition of drain/source contacts to reduce the access (ungated) channel regions and to boost the device performance. The generated photocurrent, inherently restricted by the limited optical absorption cross section of 2D materials, can be enhanced by 2 orders of magnitude by top gating. The resulting photoresponsivity can reach 4.0 A/W under an illumination power density of 0.5 mW/cm^2, and the dark current can be minimized to a few picoamperes, yielding a low noise-equivalent power of 2.5 × 10^-16 W/Hz^(1/2). Tailoring 2D heterostacks as well as the device architecture moves the applications of 2D-based optoelectronic devices one big step forward.

  19. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    PubMed

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity at reduced computation cost, a significant attribute for a scalable centralized-control SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
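
    A simplified sketch of such a heuristic (our reconstruction, not the authors' code): requests are served in descending order of demand intensity times end-to-end distance, each on its shortest path with first-fit wavelength assignment.

    ```python
    # Hottest-request-first RWA with first-fit wavelengths. Illustrative only;
    # networkx is assumed available for shortest paths.
    import networkx as nx

    def rwa(g, requests, n_wavelengths=8):
        """requests: list of (src, dst, demand); returns {request index: wavelength}."""
        edge = lambda u, v: (min(u, v), max(u, v))
        used = {}                                  # link -> set of busy wavelengths
        paths = [nx.shortest_path(g, s, d) for s, d, _ in requests]
        hottest = sorted(range(len(requests)),     # demand x hop-count ordering
                         key=lambda i: -requests[i][2] * (len(paths[i]) - 1))
        assigned = {}
        for i in hottest:
            hops = [edge(u, v) for u, v in zip(paths[i], paths[i][1:])]
            for lam in range(n_wavelengths):       # first-fit wavelength
                if all(lam not in used.setdefault(h, set()) for h in hops):
                    for h in hops:
                        used[h].add(lam)
                    assigned[i] = lam
                    break
        return assigned

    ring = nx.cycle_graph(6)
    print(rwa(ring, [(0, 3, 5.0), (1, 4, 2.0), (0, 2, 9.0)]))
    ```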

  20. Developing highly scalable fluid solvers for enabling multiphysics simulation.

    SciTech Connect

    Clausen, Jonathan R

    2013-03-01

    We performed an investigation into explicit algorithms for the simulation of incompressible flows using methods with a finite, but small, amount of compressibility added. Such methods include the artificial compressibility method and the lattice-Boltzmann method. The impetus for investigating such techniques stems from the increasing use of parallel computation at all levels (processors, clusters, and graphics processing units). Explicit algorithms have the potential to leverage these resources. In our investigation, a new form of artificial compressibility was derived. This method, referred to as the Entropically Damped Artificial Compressibility (EDAC) method, demonstrated superior results to traditional artificial compressibility methods by damping the numerical acoustic waves associated with these methods. Performance nearing that of the lattice-Boltzmann technique was observed, without the requirement of recasting the problem in terms of particle distribution functions; continuum variables may be used. Several example problems were investigated using finite-difference and finite-element discretizations of the EDAC equations. Example problems included lid-driven cavity flow, a convecting Taylor-Green vortex, a doubly periodic shear layer, freely decaying turbulence, and flow over a square cylinder. Additionally, a scalability study was performed using in excess of one million processing cores. Explicit methods were found to have desirable scaling properties; however, some robustness and general applicability issues remained.
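
    The entropic damping idea can be sketched in 1D: the pressure equation of classic artificial compressibility gains a diffusion term that damps acoustic oscillations. The snippet below is our toy explicit discretization with a frozen velocity field, not the report's solver; see the report for the full EDAC equations.

    ```python
    # Toy 1D pressure update: dp/dt = -rho*c^2 * du/dx + nu * d2p/dx2.
    # The nu term diffuses (damps) the spurious acoustic waves of classic
    # artificial compressibility. Velocity is frozen purely for illustration.
    import numpy as np

    n = 128
    dx, dt = 1.0 / n, 1e-4
    rho, c, nu = 1.0, 10.0, 0.05
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)            # frozen, periodic velocity field
    p = np.zeros(n)

    ddx = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)
    lap = lambda f: (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

    for _ in range(2000):                # explicit Euler; dt respects dx^2/(2*nu)
        p = p + dt * (-rho * c**2 * ddx(u) + nu * lap(p))
    print(round(float(p.max()), 1))      # pressure relaxes smoothly, no ringing
    ```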

  1. A highly scalable, interoperable clinical decision support service

    PubMed Central

    Goldberg, Howard S; Paterno, Marilyn D; Rocha, Beatriz H; Schaeffer, Molly; Wright, Adam; Erickson, Jessica L; Middleton, Blackford

    2014-01-01

    Objective To create a clinical decision support (CDS) system that is shareable across healthcare delivery systems and settings over large geographic regions. Materials and methods The enterprise clinical rules service (ECRS) realizes nine design principles through a series of enterprise java beans and leverages off-the-shelf rules management systems in order to provide consistent, maintainable, and scalable decision support in a variety of settings. Results The ECRS is deployed at Partners HealthCare System (PHS) and is in use for a series of trials by members of the CDS consortium, including internally developed systems at PHS, the Regenstrief Institute, and vendor-based systems deployed at locations in Oregon and New Jersey. Performance measures indicate that the ECRS provides sub-second response time when measured apart from services required to retrieve data and assemble the continuity of care document used as input. Discussion We consider related work, design decisions, comparisons with emerging national standards, and discuss uses and limitations of the ECRS. Conclusions ECRS design, implementation, and use in CDS consortium trials indicate that it provides the flexibility and modularity needed for broad use and performs adequately. Future work will investigate additional CDS patterns, alternative methods of data passing, and further optimizations in ECRS performance. PMID:23828174

  2. Vertical nanowire electrode array: a highly scalable platform for intracellular interfacing to neuronal circuits

    NASA Astrophysics Data System (ADS)

    Jorgolli, Marsela; Robinson, Jacob; Shalek, Alex; Yoon, Myung-Han; Gertner, Rona; Park, Hongkun

    2012-02-01

    Interrogation of complex neuronal networks requires new experimental tools that are sensitive enough to quantify the strengths of synaptic connections, yet scalable enough to couple to a large number of neurons simultaneously. Here, we will present a new, highly scalable intracellular electrode platform based on vertical nanowires that affords parallel interfacing to multiple mammalian neurons. Specifically, we show that our vertical nanowire electrode arrays can intracellularly record and stimulate neuronal activity in dissociated cultures of rat cortical neurons and be used to map multiple individual synaptic connections. This platform's scalability and full compatibility with silicon nanofabrication techniques provide a clear path toward simultaneous high-fidelity interfacing with hundreds of individual neurons, opening up exciting new avenues for neuronal circuit studies and prosthetics.

  3. Scalable collaborative targeted learning for high-dimensional data.

    PubMed

    Ju, Cheng; Gruber, Susan; Lendle, Samuel D; Chambaz, Antoine; Franklin, Jessica M; Wyss, Richard; Schneeweiss, Sebastian; van der Laan, Mark J

    2017-01-01

    ...seem to indicate that our scalable collaborative targeted minimum loss-based estimation and SL-C-TMLE algorithms work well. All C-TMLEs are publicly available in a Julia software package.

  4. Highly defective graphite for scalable synthesis of nitrogen doped holey graphene with high volumetric capacitance

    NASA Astrophysics Data System (ADS)

    Zhang, Yijie; Ji, Lei; Li, Wanfei; Zhang, Zhao; Lu, Luhua; Zhou, Lisha; Liu, Jinghai; Chen, Ying; Liu, Liwei; Chen, Wei; Zhang, Yuegang

    2016-12-01

    Manipulating the basal-plane structure of graphene for advanced energy conversion materials design has been a research frontier in recent years. By extending the size of defects in the basal plane of graphene from the atomic scale to the nanoscale, graphene with in-plane holes can be synthesized by multistep oxidation and reduction of defective graphene oxide at low concentration. These complicated, low-yield synthetic methods have largely limited research on, and applications of, holey-graphene-based high-performance energy conversion materials. Inspired by the formation mechanism of in-plane holes in graphene, an easy and scalable synthetic approach is proposed in this work. By oxidizing widely available defective graphite mineral at high concentration, holey graphene oxide was synthesized at scale. Through simple reduction of the holey graphene oxide, nitrogen-doped holey graphene with a high volumetric capacitance of 439 F/cm^3 was obtained. We believe this breakthrough provides a feasible synthetic approach for further exploring the properties and performance of holey-graphene-based materials in a variety of fields.

  5. Scalable exfoliation process for highly soluble boron nitride nanoplatelets by hydroxide-assisted ball milling.

    PubMed

    Lee, Dongju; Lee, Bin; Park, Kwang Hyun; Ryu, Ho Jin; Jeon, Seokwoo; Hong, Soon Hyung

    2015-02-11

    The scalable preparation of two-dimensional hexagonal boron nitride (h-BN) is essential for practical applications. Despite intense research in this area, high-yield production of two-dimensional h-BN with large size and high solubility remains a key challenge. In the present work, we propose a scalable exfoliation process for hydroxyl-functionalized BN nanoplatelets (OH-BNNPs) by simple ball milling of BN powders in the presence of sodium hydroxide, via the synergetic effect of chemical peeling and mechanical shear forces. The hydroxide-assisted ball milling process results in relatively large flakes with an average size of 1.5 μm, little damage to the in-plane structure of the OH-BNNPs, and high yields of 18%. The resultant OH-BNNP samples can be redispersed in various solvents and form stable dispersions that can be used for multiple purposes. The incorporation of the BNNPs into a polyethylene matrix effectively enhanced the barrier properties of the polyethylene due to the increased tortuosity of the diffusion path of the gas molecules. The hydroxide-assisted ball milling process can thus provide a simple and efficient approach to the scalable preparation of large-size and highly soluble BNNPs. Moreover, this exfoliation process is not only easily scalable but also applicable to other layered materials.

  6. Air-stable ink for scalable, high-throughput layer deposition

    DOEpatents

    Weil, Benjamin D; Connor, Stephen T; Cui, Yi

    2014-02-11

    A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.

  7. Scalable high-power and high-brightness fiber coupled diode laser devices

    NASA Astrophysics Data System (ADS)

    Köhler, Bernd; Ahlert, Sandra; Bayer, Andreas; Kissel, Heiko; Müntz, Holger; Noeske, Axel; Rotter, Karsten; Segref, Armin; Stoiber, Michael; Unger, Andreas; Wolf, Paul; Biesenbach, Jens

    2012-03-01

    The demand for high-power and high-brightness fiber-coupled diode laser devices is mainly driven by applications in solid-state laser pumping and materials processing. The ongoing power scaling of fiber lasers requires scalable fiber-coupled diode laser devices with increased power and brightness. For applications in materials processing, multi-kW output power with a beam quality of about 30 mm x mrad is needed. We have developed a modular diode laser concept combining high power, high brightness, wavelength stabilization and, optionally, low weight, which is becoming more and more important for a multitude of applications. In particular, defense technology requires robust but lightweight high-power diode laser sources combined with high brightness. At the heart of the concept is a specially tailored diode laser bar whose epitaxial and lateral structure is designed such that only standard fast- and slow-axis collimator lenses, in combination with appropriate focusing optics, are required to couple the beam into a fiber with a core diameter of 200 μm and a numerical aperture (NA) of 0.22. The spectral quality, which is an important issue especially for fiber laser pump sources, is ensured by means of Volume Holographic Gratings (VHGs) for wavelength stabilization. In this paper we present a detailed characterization of different diode laser sources based on the scalable modular concept. The optical output power is scaled from 180 W coupled into a 100 μm NA 0.22 fiber up to 1.7 kW coupled into a 400 μm NA 0.22 fiber. In addition we present a lightweight laser unit with an output power of more than 300 W from a 200 μm NA 0.22 fiber at a weight-to-power ratio of only 0.9 kg/kW.

  8. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; apply lessons learned from the high-order/spectral solution of swirling 3D jets to the electromagnetics project; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  9. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  10. Ultralight, scalable, and high-temperature–resilient ceramic nanofiber sponges

    PubMed Central

    Wang, Haolun; Zhang, Xuan; Wang, Ning; Li, Yan; Feng, Xue; Huang, Ya; Zhao, Chunsong; Liu, Zhenglian; Fang, Minghao; Ou, Gang; Gao, Huajian; Li, Xiaoyan; Wu, Hui

    2017-01-01

    Ultralight and resilient porous nanostructures have been fabricated in various material forms, including carbon, polymers, and metals. However, the development of ultralight and high-temperature resilient structures still remains extremely challenging. Ceramics exhibit good mechanical and chemical stability at high temperatures, but their brittleness and sensitivity to flaws significantly complicate the fabrication of resilient porous ceramic nanostructures. We report the manufacturing of large-scale, lightweight, high-temperature resilient, three-dimensional sponges based on a variety of oxide ceramic (for example, TiO2, ZrO2, yttria-stabilized ZrO2, and BaTiO3) nanofibers through an efficient solution blow-spinning process. The ceramic sponges consist of numerous tangled ceramic nanofibers, with densities varying from 8 to 40 mg/cm3. In situ uniaxial compression in a scanning electron microscope showed that the TiO2 nanofiber sponge exhibits high energy absorption (for example, dissipation of up to 29.6 mJ/cm3 in energy density at 50% strain) and recovers rapidly after compression in excess of 20% strain at both room temperature and 400°C. The sponge exhibits excellent resilience with residual strains of only ~1% at 800°C after 10 cycles of 10% compression strain and maintains good recoverability after compression at ~1300°C. We show that ceramic nanofiber sponges can serve multiple functions, such as elasticity-dependent electrical resistance, photocatalytic activity, and thermal insulation. PMID:28630915

  11. LED light engine concept with ultra-high scalable luminance

    NASA Astrophysics Data System (ADS)

    Hoelen, Christoph; de Boer, Dick; Bruls, Dominique; van der Eyden, Joost; Koole, Rolf; Li, Yun; Mirsadeghi, Mo; Vanbroekhoven, Vincent; Van den Bergh, John-John; Van de Voorde, Patrick

    2016-03-01

    Although LEDs have been introduced successfully in many general lighting applications during the past decade, high-brightness light source applications still suffer from the limited luminance of LEDs. High-power LEDs are generally limited in luminance to ca 100 Mnit (10^8 lm/m^2/sr) or less, while dedicated devices for projection may achieve luminance values up to ca 300 Mnit with phosphor-converted green. In particular, for high luminous flux applications with limited étendue, as in front projection systems, only very modest luminous flux values in the beam can be achieved with LEDs compared to systems based on discharge lamps. In this paper we introduce a light engine concept based on a light converter rod pumped with blue LEDs that breaks through the étendue and brightness limits of LEDs, enabling LED light source luminance values more than 4 times higher than what has been achieved with LEDs so far. In LED front projection systems, green LEDs are the main limiting factor. With our green-light-emitting modules, peak luminance values well above 1.2 Gnit have been achieved, enabling a doubling of the screen brightness of LED-based DLP projection systems, and even more when this technology is applied to the other colors as well. This light source concept, introduced as the ColorSpark High Lumen Density (HLD) LED technology, enables a breakthrough in the performance of LED-based light engines not only for projection, where >2700 ANSI lm was demonstrated, but for a wide variety of high-brightness applications.

  12. Efficient, Scalable Consistency for Highly Fault-Tolerant Storage

    DTIC Science & Technology

    2004-08-01

    ...Miguel Castro and Rodrigo Rodrigues for making the implementation of BFT publicly available... [Cabrera and Long 1991] centralize access to a metadata server. IBM's Storage Tank [Menon et al. 2003] and Lustre [Braam 2004] replace the central... Cabrera, L.-F. and Long, D. D. E. 1991. Swift: using distributed disk striping to provide high I/O data rates. Computing Systems 4, 4, 405-436.

  13. Building and managing high performance, scalable, commodity mass storage systems

    NASA Technical Reports Server (NTRS)

    Lekashman, John

    1998-01-01

    The NAS Systems Division has recently embarked on a significant new way of handling the mass storage problem. One of the basic goals of this new development are to build systems at very large capacity and high performance, yet have the advantages of commodity products. The central design philosophy is to build storage systems the way the Internet was built. Competitive, survivable, expandable, and wide open. The thrust of this paper is to describe the motivation for this effort, what we mean by commodity mass storage, what the implications are for a facility that performs such an action, and where we think it will lead.

  15. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
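
    As a usage illustration, FLANN's randomized k-d forest is accessible through OpenCV's FlannBasedMatcher; the parameter values below are our own illustrative choices, not the paper's automatically configured settings.

    ```python
    # Approximate nearest neighbor matching via FLANN inside OpenCV.
    import cv2
    import numpy as np

    rng = np.random.default_rng(0)
    train = rng.normal(size=(10000, 128)).astype(np.float32)  # e.g. SIFT-like vectors
    query = train[:5] + 0.01 * rng.normal(size=(5, 128)).astype(np.float32)

    index_params = dict(algorithm=1, trees=8)   # 1 = FLANN_INDEX_KDTREE (randomized k-d forest)
    search_params = dict(checks=64)             # accuracy/speed trade-off
    matcher = cv2.FlannBasedMatcher(index_params, search_params)
    matches = matcher.knnMatch(query, train, k=2)
    print([m[0].trainIdx for m in matches])     # expect [0, 1, 2, 3, 4]
    ```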

  16. Scalable Production of Sensor Arrays Based on High-Mobility Hybrid Graphene Field Effect Transistors.

    PubMed

    Gao, Zhaoli; Kang, Hojin; Naylor, Carl H; Streller, Frank; Ducos, Pedro; Serrano, Madeline D; Ping, Jinglei; Zauberman, Jonathan; Rajesh; Carpick, Robert W; Wang, Ying-Jun; Park, Yung Woo; Luo, Zhengtang; Ren, Li; Johnson, A T Charlie

    2016-10-07

    We have developed a scalable fabrication process for the production of DNA biosensors based on gold nanoparticle-decorated graphene field effect transistors (AuNP-Gr-FETs), where monodisperse AuNPs are created through physical vapor deposition followed by thermal annealing. The FETs are created in a four-probe configuration, using an optimized bilayer photolithography process that yields chemically clean devices, as confirmed by XPS and AFM, with high carrier mobility (3590 ± 710 cm^2/V·s) and low unintended doping (Dirac voltages of 9.4 ± 2.7 V). The AuNP-Gr-FETs were readily functionalized with thiolated probe DNA to yield DNA biosensors with a detection limit of 1 nM and high specificity against noncomplementary DNA. Our work provides a pathway toward the scalable fabrication of high-performance AuNP-Gr-FET devices for label-free nucleic acid testing in a realistic clinical setting.

  17. Highly scalable digital front end architectures for digital printing

    NASA Astrophysics Data System (ADS)

    Staas, David

    2011-01-01

    HP's digital printing presses consume a tremendous amount of data. The architectures of the Digital Front Ends (DFEs) that feed these large, very fast presses have evolved from basic, single-RIP (Raster Image Processor) systems to multirack, distributed systems that can take a PDF file and deliver data in excess of 3 Gigapixels per second to keep the presses printing at 2000+ pages per minute. This paper highlights some of the more interesting parallelism features of our DFE architectures. The high-performance architecture developed over the last 5+ years can scale up to HP's largest digital press, out to multiple mid-range presses, and down into a very low-cost single box deployment for low-end devices as appropriate. Principles of parallelism pervade every aspect of the architecture, from the lowest-level elements of jobs to parallel imaging pipelines that feed multiple presses. From cores to threads to arrays to network teams to distributed machines, we use a systematic approach to move bottlenecks. The ultimate goals of these efforts are: to take the best advantage of the prevailing hardware options at our disposal; to reduce power consumption and cooling requirements; and to ultimately reduce the cost of the solution to our customers.

  18. Scalable, high performance, enzymatic cathodes based on nanoimprint lithography.

    PubMed

    Pankratov, Dmitry; Sundberg, Richard; Sotres, Javier; Suyatin, Dmitry B; Maximov, Ivan; Shleev, Sergey; Montelius, Lars

    2015-01-01

    Here we detail high-performance enzymatic electrodes for oxygen bio-electroreduction, which can be easily and reproducibly fabricated with industry-scale throughput. Planar and nanostructured electrodes were built on biocompatible, flexible polymer sheets, while nanoimprint lithography was used for electrode nanostructuring. To the best of our knowledge, this is one of the first reports concerning the usage of nanoimprint lithography for amperometric bioelectronic devices. The enzyme (Myrothecium verrucaria bilirubin oxidase) was immobilised on planar (control) and artificially nanostructured gold electrodes by direct physical adsorption. A detailed electrochemical investigation of the bioelectrodes was performed, and the following parameters were obtained: an open circuit voltage of approximately 0.75 V, and maximum bio-electrocatalytic current densities of 18 µA/cm^2 and 58 µA/cm^2 in air-saturated buffers versus 48 µA/cm^2 and 186 µA/cm^2 in oxygen-saturated buffers for planar and nanostructured electrodes, respectively. The half-deactivation times of planar and nanostructured biocathodes were measured to be 2 h and 14 h, respectively. The comparison of standard heterogeneous and bio-electrocatalytic rate constants showed that the improved bio-electrocatalytic performance of the nanostructured biocathodes compared to planar biodevices is due to the increased surface area of the nanostructured electrodes, whereas their improved operational stability is attributed to stabilisation of the enzyme inside nanocavities.

  19. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
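
    As a worked example of the Amdahl's-law analysis (our arithmetic, not the paper's data): with the parallel fraction as the only parameter, the observed 12-fold speedup on 12 cores implies an almost fully parallelizable workload.

    ```python
    # Amdahl's law: speedup on n cores given the parallelizable fraction p.
    def amdahl_speedup(p, n_cores):
        return 1.0 / ((1.0 - p) + p / n_cores)

    for p in (0.95, 0.99, 1.0):
        print(p, round(amdahl_speedup(p, 12), 2))  # 7.74, 10.81, 12.0
    ```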

  20. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.

  1. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl’s law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506

  2. Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires

    SciTech Connect

    Tarsa, Eric

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, Cree achieved high system optical efficiency. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  3. Towards a highly-scalable wireless implantable system-on-a-chip for gastric electrophysiology.

    PubMed

    Ibrahim, Ahmed; Farajidavar, Aydin; Kiani, Mehdi

    2015-08-01

    This paper presents the system design of a highly scalable system-on-a-chip (SoC) to wirelessly and chronically detect the mechanisms underlying gastric dysrhythmias. The proposed wireless implantable gastric-wave recording (WIGR) SoC records gastric slow-wave and spike activities from 256 sites and establishes transcutaneous data communication with an external reader while being inductively powered. The SoC is highly scalable because it employs a modular architecture for the analog front-end (AFE), a near-field pulse-delay modulation (PDM) data transmitter (Tx) whose data rate is proportional to the power-carrier frequency (fp), and an adaptive power management unit equipped with automatic resonance tuning (ART) that dynamically compensates for environmental variations and fp variations of the implant power coil. Simulation and measurement results for the individual blocks are presented.

  4. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    DTIC Science & Technology

    2017-04-19

    Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics. George Mathew, Lincoln Laboratory, Massachusetts. Using video summarization, the time to capture frames from the VMS and the time to process the frames in a single thread are reported as relative percentages across 3 different cameras (with resolutions as in Table I) and are shown in Fig. 2 ("Split-up of time for an analytic run").

  5. Volume-scalable high-brightness three-dimensional visible light source

    SciTech Connect

    Subramania, Ganapathi; Fischer, Arthur J; Wang, George T; Li, Qiming

    2014-02-18

    A volume-scalable, high-brightness, electrically driven visible light source comprises a three-dimensional photonic crystal (3DPC) comprising one or more direct bandgap semiconductors. The improved light emission performance of the invention is achieved based on the enhancement of radiative emission of light emitters placed inside a 3DPC due to the strong modification of the photonic density-of-states engendered by the 3DPC.

  6. Scalable multiplexed detector system for high-rate telecom-band single-photon detection.

    PubMed

    Brida, G; Degiovanni, I P; Piacentini, F; Schettini, V; Polyakov, S V; Migdall, A

    2009-11-01

    We present an actively multiplexed photon-counting detection system at telecom wavelengths that overcomes the difficulties of photon counting at high rates. We find that for gated detectors, the heretofore unconsidered deadtime associated with the detector gate is a critical parameter that limits the overall scalability of the scheme to just a few detectors. We propose and implement a new scheme that overcomes this problem and restores full scalability, allowing an order-of-magnitude improvement with systems of as few as 4 detectors. When using just two multiplexed detectors, our experimental results show a 5x improvement over a single detector and a greater than 2x improvement over multiplexed schemes that do not consider gate deadtime.
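
    The effect of detector deadtime on multiplexing can be illustrated with a toy Monte Carlo that routes Poisson photon arrivals to the first non-dead detector in the bank; the photon rate, deadtime, and detector counts below are illustrative assumptions, not the paper's parameters:

```python
# Toy Monte Carlo of actively multiplexed photon counting with detector
# deadtime, loosely modeled on the scheme described above. Rates, deadtime,
# and detector counts are illustrative assumptions, not the paper's values.
import random

def detected_fraction(n_det: int, rate_hz: float, deadtime_s: float,
                      n_photons: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    free_at = [0.0] * n_det          # time at which each detector is free again
    t, hits = 0.0, 0
    for _ in range(n_photons):
        t += rng.expovariate(rate_hz)      # Poisson arrivals
        # route the photon to the first detector that is not dead
        for i in range(n_det):
            if free_at[i] <= t:
                free_at[i] = t + deadtime_s
                hits += 1
                break
    return hits / n_photons

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(n, "detectors:", round(detected_fraction(n, 5e6, 10e-6), 3))
```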

  7. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    SciTech Connect

    Widener, Patrick; Jaconette, Steven; Bridges, Patrick G.; Xia, Lei; Dinda, Peter; Cui, Zheng.; Lange, John; Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  8. A Novel Motion Field Anchoring Paradigm for Highly Scalable Wavelet-Based Video Coding.

    PubMed

    Rufenacht, Dominic; Mathew, Reji; Taubman, David

    2016-01-01

    Existing video coders anchor motion fields at the frames that are to be predicted. In this paper, we demonstrate how changing the anchoring of motion fields to reference frames has some important advantages over conventional anchoring. We work with piecewise-smooth motion fields and use breakpoints to signal discontinuities at moving object boundaries. We show how discontinuity information can be used to resolve the double mappings that arise when motion is warped from reference to target frames. We present an analytical model that allows us to determine weights for texture, motion, and breakpoints to guide the rate allocation for scalable encoding. Compared with the conventional way of anchoring motion fields, the proposed scheme requires fewer bits for the coding of motion; furthermore, the reconstructed video frames contain fewer ghosting artefacts. The experimental results show superior performance compared with traditional anchoring and demonstrate the high scalability attributes of the proposed method.

  9. A Scalable Epitope Tagging Approach for High Throughput ChIP-Seq Analysis.

    PubMed

    Xiong, Xiong; Zhang, Yanxiao; Yan, Jian; Jain, Surbhi; Chee, Sora; Ren, Bing; Zhao, Huimin

    2017-06-16

    Eukaryotic transcription factors (TFs) typically recognize short genomic sequences, alone or together with other proteins, to modulate gene expression. Mapping of TF-DNA interactions in the genome is crucial for understanding the gene regulatory programs in cells. While chromatin immunoprecipitation followed by sequencing (ChIP-Seq) is commonly used for this purpose, its application is severely limited by the availability of suitable antibodies for TFs. To overcome this limitation, we developed an efficient and scalable strategy named cmChIP-Seq that combines the clustered regularly interspaced short palindromic repeats (CRISPR) technology with microhomology-mediated end joining (MMEJ) to genetically engineer a TF with an epitope tag. We demonstrated the utility of this tool by applying it to four TFs in a human colorectal cancer cell line. The highly scalable procedure makes this strategy ideal for ChIP-Seq analysis of TFs in diverse species and cell types.

  10. Scalable high-precision tuning of photonic resonators by resonant cavity-enhanced photoelectrochemical etching

    PubMed Central

    Gil-Santos, Eduardo; Baker, Christopher; Lemaître, Aristide; Gomez, Carmen; Leo, Giuseppe; Favero, Ivan

    2017-01-01

    Photonic lattices of mutually interacting indistinguishable cavities represent a cornerstone of collective phenomena in optics and could become important in advanced sensing or communication devices. The disorder induced by fabrication technologies has so far hindered the development of such resonant cavity architectures, while post-fabrication tuning methods have been limited by complexity and poor scalability. Here we present a new simple and scalable tuning method for ensembles of microphotonic and nanophotonic resonators, which enables their permanent collective spectral alignment. The method introduces an approach of cavity-enhanced photoelectrochemical etching in a fluid, a resonant process triggered by sub-bandgap light that allows for high selectivity and precision. The technique is presented on a gallium arsenide nanophotonic platform and illustrated by finely tuning one, two and up to five resonators. It opens the way to applications requiring large networks of identical resonators and their spectral referencing to external etalons. PMID:28117394

  11. Scalable high-precision tuning of photonic resonators by resonant cavity-enhanced photoelectrochemical etching

    NASA Astrophysics Data System (ADS)

    Gil-Santos, Eduardo; Baker, Christopher; Lemaître, Aristide; Gomez, Carmen; Leo, Giuseppe; Favero, Ivan

    2017-01-01

    Photonic lattices of mutually interacting indistinguishable cavities represent a cornerstone of collective phenomena in optics and could become important in advanced sensing or communication devices. The disorder induced by fabrication technologies has so far hindered the development of such resonant cavity architectures, while post-fabrication tuning methods have been limited by complexity and poor scalability. Here we present a new simple and scalable tuning method for ensembles of microphotonic and nanophotonic resonators, which enables their permanent collective spectral alignment. The method introduces an approach of cavity-enhanced photoelectrochemical etching in a fluid, a resonant process triggered by sub-bandgap light that allows for high selectivity and precision. The technique is presented on a gallium arsenide nanophotonic platform and illustrated by finely tuning one, two and up to five resonators. It opens the way to applications requiring large networks of identical resonators and their spectral referencing to external etalons.

  12. CGLX: a scalable, high-performance visualization framework for networked display environments.

    PubMed

    Doerr, Kai-Uwe; Kuester, Falko

    2011-03-01

    The Cross Platform Cluster Graphics Library (CGLX) is a flexible and transparent OpenGL-based graphics framework for distributed, high-performance visualization systems. CGLX allows OpenGL-based applications to utilize massively scalable visualization clusters such as multiprojector or high-resolution tiled display environments and to maximize the achievable performance and resolution. The framework features a programming interface for hardware-accelerated rendering of OpenGL applications on visualization clusters, mimicking a GLUT-like (OpenGL Utility Toolkit) interface to enable smooth translation of single-node applications to distributed parallel rendering applications. CGLX provides a unified, scalable, distributed OpenGL context to the user by intercepting and manipulating certain OpenGL directives. CGLX's interception mechanism, in combination with the core functionality for users to register callbacks, enables this framework to manage a visualization grid without additional implementation requirements for the user. Although CGLX grants access to its core engine, allowing users to change its default behavior, general development can occur in the context of a standalone desktop. The framework provides an easy-to-use graphical user interface (GUI) and tools to test, set up, and configure a visualization cluster. This paper describes CGLX's architecture, tools, and system components. We present performance and scalability tests with different types of applications, and we compare the results with a Chromium-based approach.
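
    Because CGLX mimics a GLUT-like interface, a single-node application keeps its familiar callback structure when moved to a tiled display. The sketch below shows that structure using PyOpenGL's GLUT bindings; it illustrates the programming model only and is not CGLX's actual API (CGLX itself is a C/C++ library):

```python
# Minimal GLUT-style OpenGL app (PyOpenGL) illustrating the callback
# structure that a GLUT-like framework such as CGLX mimics. This is not
# CGLX's actual API; CGLX is a C/C++ library with analogous entry points.
import sys
from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT
from OpenGL.GLUT import (glutInit, glutInitDisplayMode, glutCreateWindow,
                         glutDisplayFunc, glutSwapBuffers, glutMainLoop,
                         GLUT_DOUBLE, GLUT_RGB)

def display():
    # In a CGLX-like framework, this callback runs on every cluster node;
    # the framework intercepts context/projection calls so each tile
    # renders its portion of one unified, distributed OpenGL context.
    glClearColor(0.1, 0.1, 0.1, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)
    glutSwapBuffers()

if __name__ == "__main__":
    glutInit(sys.argv)
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutCreateWindow(b"single-node app, portable to a tiled display")
    glutDisplayFunc(display)
    glutMainLoop()
```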

  13. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, for compensating temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters add extra bits to the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead; WP parameter prediction is therefore crucial to research work and applications related to WP. Prior art further improved WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms (enhanced implicit WP parameter prediction, enhanced direct WP parameter derivation, and interlayer WP parameter prediction) to further improve the coding efficiency of HEVC. Results show that our proposed algorithms achieve up to 5.83% and 5.23% bitrate reductions compared with conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
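
    For reference, WP applies a fixed-point multiplicative weight and an additive offset to each reference-frame sample. The sketch below follows the standard's uni-directional form; the particular weight, offset, and 8-bit sample depth are illustrative assumptions:

```python
# Sketch of HEVC-style explicit weighted prediction (uni-directional),
# using the standard's weight/offset form: a fixed-point multiplicative
# weight plus an additive offset per reference frame. Parameter values
# below are illustrative, not taken from the paper.

def clip(v: int, lo: int = 0, hi: int = 255) -> int:
    return max(lo, min(hi, v))

def weighted_pred(ref_sample: int, w: int, offset: int, log_wd: int = 6) -> int:
    """pred = ((ref * w + 2^(log_wd - 1)) >> log_wd) + offset, clipped."""
    rounding = 1 << (log_wd - 1)
    return clip(((ref_sample * w + rounding) >> log_wd) + offset)

if __name__ == "__main__":
    # Model a fade-to-black: scale reference samples by ~0.75 and darken.
    w, offset = 48, -8          # 48/64 = 0.75 in Q6 fixed point
    print([weighted_pred(s, w, offset) for s in (0, 64, 128, 255)])
```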

  14. Scalable fabrication of high-quality, ultra-thin single crystal diamond membrane windows

    NASA Astrophysics Data System (ADS)

    Piracha, Afaq Habib; Ganesan, Kumaravelu; Lau, Desmond W. M.; Stacey, Alastair; McGuinness, Liam P.; Tomljenovic-Hanic, Snjezana; Prawer, Steven

    2016-03-01

    High quality, ultra-thin single crystal diamond (SCD) membranes that have a thickness in the sub-micron range are of extreme importance as a materials platform for photonics, quantum sensing, nano/micro electro-mechanical systems (N/MEMS) and other diverse applications. However, the scalable fabrication of such thin SCD membranes is a challenging process. In this paper, we demonstrate a new method which enables high quality, large size (~4 × 4 mm) and low surface roughness, low strain, ultra-thin SCD membranes which can be fabricated without deformations such as breakage, bowing or bending. These membranes are easy to handle making them particularly suitable for fabrication of optical and mechanical devices. We demonstrate arrays of single crystal diamond membrane windows (SCDMW), each up to 1 × 1 mm in dimension and as thin as ~300 nm, supported by a diamond frame as thick as ~150 μm. The fabrication method is robust, reproducible, scalable and cost effective. Microwave plasma chemical vapour deposition is used for in situ creation of single nitrogen-vacancy (NV) centers into the thin SCDMW. We have also developed an SCD drum-head mechanical resonator composed of our fully clamped and freely suspended membranes.

  15. Highly sensitive and scalable AAO-based nano-fibre SERS substrate for sensing application

    NASA Astrophysics Data System (ADS)

    Lim, L. K.; Ng, B. K.; Fu, C. Y.; Tobing, Landobasa Y. M.; Zhang, D. H.

    2017-06-01

    Well-ordered periodic nanostructures are excellent substrates for many surface-enhanced Raman spectroscopy (SERS) applications. Conventional fabrication approaches such as high precision electron beam lithography or focused ion beam produce high resolution nano-features with great reproducibility at the expense of low throughput. In this work, a highly sensitive and scalable AAO-nano-fibre (ANF) SERS substrate is demonstrated by optimising the second anodisation time of the standard two-step anodisation of aluminium and performing an additional wet etching step on the resulting AAO substrate. The optimised ANF substrate exhibits SERS sensitivity that surpasses the AAO nanoholes and the metal-film-on-nanoparticles substrates. A detection limit of 0.1 nM is achieved with a signal-to-noise ratio of 2.6-3 using a low excitation power of 0.1 mW. The ANF substrate exhibits an enhancement factor of 9.28 × 10⁶ and a standard deviation of no more than 8%. The results indicate that the highly sensitive and scalable ANF substrate is a promising substrate for commercial SERS application.

  16. Wafer-scalable high-performance CVD graphene devices and analog circuits

    NASA Astrophysics Data System (ADS)

    Tao, Li; Lee, Jongho; Li, Huifeng; Piner, Richard; Ruoff, Rodney; Akinwande, Deji

    2013-03-01

    Graphene field effect transistors (GFETs) will serve as an essential component for functional modules like amplifiers and frequency doublers in analog circuits. The performance of these modules is directly related to the mobility of charge carriers in GFETs, which per this study has been greatly improved. Low-field electrostatic measurements show field mobility values up to 12,000 cm²/V·s at ambient conditions with our newly developed scalable CVD graphene. For both hole and electron transport, fabricated GFETs offer substantial amplification for small and large signals at quasi-static frequencies, limited only by external capacitances at high frequencies. GFETs biased at the peak transconductance point featured high small-signal gain with eventual output power compression similar to conventional transistor amplifiers. GFETs operating around the Dirac voltage afforded positive conversion gain for the first time, to our knowledge, in experimental graphene frequency doublers. This work suggests a realistic prospect for high performance linear and non-linear analog circuits based on the unique electron-hole symmetry and fast transport now accessible in wafer-scalable CVD graphene. *Support from the NSF CAREER award (ECCS-1150034) and the W. M. Keck Foundation is appreciated.

  17. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  18. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
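
    In the stateless RESTful style described above, a spatial read maps naturally onto a single HTTP GET. The sketch below shows such a cutout request; the URL pattern, token, and parameters are invented for illustration and do not reproduce the project's real endpoints:

```python
# Hypothetical sketch of a stateless RESTful cutout request, in the spirit
# of the Web-service interface described above. The URL pattern, token,
# and parameters are invented for illustration; consult the project's
# documentation for the real endpoints.
import requests

def fetch_cutout(base_url: str, token: str,
                 x: tuple, y: tuple, z: tuple, resolution: int) -> bytes:
    # Spatially indexed reads map naturally onto a single GET:
    url = (f"{base_url}/{token}/cutout/{resolution}/"
           f"{x[0]},{x[1]}/{y[0]},{y[1]}/{z[0]},{z[1]}/")
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    return resp.content    # e.g., a compressed voxel block

if __name__ == "__main__":
    data = fetch_cutout("https://example.org/ocp", "demo_dataset",
                        (1000, 1512), (2000, 2512), (100, 116), 1)
    print(len(data), "bytes")
```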

  19. Scalable fabrication of high-quality, ultra-thin single crystal diamond membrane windows.

    PubMed

    Piracha, Afaq Habib; Ganesan, Kumaravelu; Lau, Desmond W M; Stacey, Alastair; McGuinness, Liam P; Tomljenovic-Hanic, Snjezana; Prawer, Steven

    2016-03-28

    High quality, ultra-thin single crystal diamond (SCD) membranes that have a thickness in the sub-micron range are of extreme importance as a materials platform for photonics, quantum sensing, nano/micro electro-mechanical systems (N/MEMS) and other diverse applications. However, the scalable fabrication of such thin SCD membranes is a challenging process. In this paper, we demonstrate a new method which enables high quality, large size (∼4 × 4 mm) and low surface roughness, low strain, ultra-thin SCD membranes which can be fabricated without deformations such as breakage, bowing or bending. These membranes are easy to handle making them particularly suitable for fabrication of optical and mechanical devices. We demonstrate arrays of single crystal diamond membrane windows (SCDMW), each up to 1 × 1 mm in dimension and as thin as ∼300 nm, supported by a diamond frame as thick as ∼150 μm. The fabrication method is robust, reproducible, scalable and cost effective. Microwave plasma chemical vapour deposition is used for in situ creation of single nitrogen-vacancy (NV) centers into the thin SCDMW. We have also developed SCD drum head mechanical resonator composed of our fully clamped and freely suspended membranes.

  20. Scalable Growth of High Mobility Dirac Semimetal Cd3As2 Microbelts.

    PubMed

    Chen, Zhi-Gang; Zhang, Cheng; Zou, Yichao; Zhang, Enze; Yang, Lei; Hong, Min; Xiu, Faxian; Zou, Jin

    2015-09-09

    Three-dimensional (3D) Dirac semimetals are 3D analogues of graphene, which display Dirac points with linear dispersion in k-space, stabilized by crystal symmetry. Cd3As2 was predicted to be a 3D Dirac semimetal and was subsequently confirmed by angle-resolved photoemission spectroscopy. As unveiled by transport measurements, several exotic phases, such as Weyl semimetals, topological insulators, and topological superconductors, can be induced by breaking time-reversal or inversion symmetry. Here, we report a facile and scalable chemical vapor deposition method to fabricate high-quality Dirac semimetal Cd3As2 microbelts; they show ultrahigh mobility up to 1.15 × 10⁵ cm² V⁻¹ s⁻¹ and pronounced Shubnikov-de Haas oscillations. Such extraordinary features are attributed to the suppression of electron backscattering. This research opens a new avenue for the scalable fabrication of Cd3As2 materials toward exciting electronic applications of 3D Dirac semimetals.

  1. Construction of a Smart Medication Dispenser with High Degree of Scalability and Remote Manageability

    PubMed Central

    Pak, JuGeon; Park, KeeHyun

    2012-01-01

    We propose a smart medication dispenser with a high degree of scalability and remote manageability. We construct the dispenser with an extensible hardware architecture to achieve scalability, and we install an agent program in it to achieve remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button at that time, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser normally contains one MDT; however, the dispenser can be extended to include more MDTs in order to support multiple users with one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configurations to the monitoring server. In the case of a specific event, such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically, without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and suitably performs the management operations issued by the medication monitoring server. PMID:22899886

  2. Thermally efficient and highly scalable In2Se3 nanowire phase change memory

    NASA Astrophysics Data System (ADS)

    Jin, Bo; Kang, Daegun; Kim, Jungsik; Meyyappan, M.; Lee, Jeong-Soo

    2013-04-01

    The electrical characteristics of nonvolatile In2Se3 nanowire phase change memory are reported. Size-dependent memory switching behavior was observed in nanowires of varying diameters, with set/reset threshold voltages as low as 3.45 V/6.25 V for a 60 nm nanowire, which is promising for highly scalable nanowire memory applications. The size-dependent thermal resistance of In2Se3 nanowire memory cells was also estimated, with values as high as 5.86 × 10¹³ and 1.04 × 10⁶ K/W for a 60 nm nanowire memory cell in the amorphous and crystalline phases, respectively. Such high thermal resistances are beneficial for improving thermal efficiency and thus reducing programming power consumption, based on Fourier's law. The evaluation of thermal resistance provides an avenue to develop thermally efficient memory cell architectures.
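
    To see how a high thermal resistance lowers programming power, a back-of-envelope estimate from Fourier's law (P ≈ ΔT / R_th) can be run with the thermal resistances quoted above; the required temperature rise ΔT below is an illustrative assumption, not a reported value:

```python
# Back-of-envelope programming-power estimate from Fourier's law,
# P ≈ ΔT / R_th, using the thermal resistances quoted above. The required
# temperature rise ΔT is an illustrative assumption, not a reported value.

def programming_power(delta_t_k: float, r_th_k_per_w: float) -> float:
    """Steady-state power needed to hold a temperature rise delta_t_k."""
    return delta_t_k / r_th_k_per_w

if __name__ == "__main__":
    delta_t = 600.0                       # assumed rise to reach melting, K
    for phase, r_th in (("amorphous", 5.86e13), ("crystalline", 1.04e6)):
        p = programming_power(delta_t, r_th)
        print(f"{phase}: R_th = {r_th:.3g} K/W -> P ≈ {p:.3g} W")
```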

  3. Highly-scalable disruptive reading and restoring scheme for Gb-scale SPRAM and beyond

    NASA Astrophysics Data System (ADS)

    Takemura, R.; Kawahara, T.; Ono, K.; Miura, K.; Matsuoka, H.; Ohno, H.

    2011-04-01

    We propose a disruptive reading and restoration scheme for a high-density spin-transfer-torque random access memory (SPRAM). The proposed scheme exploits the fact that, at a desired error rate, the tunnel magnetoresistance (TMR) device (the storage element of the SPRAM) does not switch the magnetization of its free layer within a specific period of a large current pulse. A restoration operation is then performed to secure the stored data. As a result, while preserving the good scalability of spin-transfer-torque writing toward Gb-scale densities and beyond, high-speed reading with read-disturbance-free operation can be achieved. This scheme also enables the SPRAM to accept DDRx-SDRAM-compatible operation. In addition, we propose a 4F² cell structure with a vertical transistor and assess the reliability of the tunnel barrier of the TMR devices for a Gb-scale SPRAM.
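
    The control flow of such a read-then-restore scheme can be sketched as follows; all device-level behavior is reduced to hypothetical stubs, and timing, current amplitudes, and sensing circuits are abstracted away:

```python
# Control-flow sketch of a disruptive read followed by a restore, in the
# spirit of the scheme described above. All device-level operations are
# hypothetical stubs; this is an illustration, not the paper's circuit.

class SpramCell:
    def __init__(self, bit: int):
        self.bit = bit

    def sense_with_large_pulse(self) -> int:
        # A large read pulse gives a fast, reliable sense but may disturb
        # (flip) the free layer. Here we simply return the stored value.
        return self.bit

    def write(self, bit: int):
        self.bit = bit

def disruptive_read(cell: SpramCell) -> int:
    data = cell.sense_with_large_pulse()
    # Restore unconditionally: even if the read pulse disturbed the free
    # layer, writing the sensed value back secures the stored data.
    cell.write(data)
    return data

if __name__ == "__main__":
    cell = SpramCell(1)
    print(disruptive_read(cell), cell.bit)   # sensed value, restored state
```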

  4. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    SciTech Connect

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.; /Johns Hopkins U.

    2004-09-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on its database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
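
    A hypothetical client-side sketch of the pattern described above: read-only queries travel as plain, cacheable HTTP GETs (so squid proxies can satisfy repeated requests) and return XML datagrams. The endpoint, parameter names, and proxy address are invented for illustration and are not Frontier's real interface:

```python
# Hypothetical client-side sketch of the multi-tier pattern above: a
# read-only query goes out as a plain HTTP GET (cacheable by squid
# proxies along the way) and comes back as an XML datagram. Endpoint,
# query names, and proxy address are invented for illustration.
import requests
import xml.etree.ElementTree as ET

PROXIES = {"http": "http://squid-cache.example.org:3128"}  # assumed proxy

def fetch_calibration(run_number: int) -> ET.Element:
    resp = requests.get("http://frontier.example.org/Frontier",
                        params={"type": "calibration", "run": run_number},
                        proxies=PROXIES, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)   # parse the XML datagram

if __name__ == "__main__":
    root = fetch_calibration(168001)   # illustrative run number
    print(root.tag, len(list(root)))
```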

  5. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect networks in high performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors, and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient, and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.

  6. Scalable High Performance Message Passing over InfiniBand for Open MPI

    SciTech Connect

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed-delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.
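
    The scaling argument can be made concrete with a toy calculation: under RC, each process holds one queue pair (QP) per remote peer, while UD needs only a small constant number. The per-QP memory figure below is an assumed illustrative value, not a measurement from the paper:

```python
# Illustration of why RC connections become prohibitive at scale: each
# process needs a queue pair (QP) per peer under RC, versus a small
# constant number under UD. The per-QP memory footprint is an assumed
# illustrative value, not a measurement from the paper.

QP_MEM_BYTES = 64 * 1024      # assumed footprint per queue pair

def rc_mem_per_process(n_procs: int) -> int:
    return (n_procs - 1) * QP_MEM_BYTES     # one QP per remote peer

def ud_mem_per_process(n_qps: int = 4) -> int:
    return n_qps * QP_MEM_BYTES             # near-constant, by design

if __name__ == "__main__":
    for n in (256, 4096, 65536):
        print(f"N={n:6d}: RC {rc_mem_per_process(n) / 2**20:8.1f} MiB/process, "
              f"UD {ud_mem_per_process() / 2**20:.2f} MiB/process")
```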

  7. Highly Efficient and Scalable Separation of Semiconducting Carbon Nanotubes via Weak Field Centrifugation

    PubMed Central

    Reis, Wieland G.; Weitz, R. Thomas; Kettner, Michel; Kraus, Alexander; Schwab, Matthias Georg; Tomović, Željko; Krupke, Ralph; Mikhael, Jules

    2016-01-01

    The identification of scalable processes that transfer random mixtures of single-walled carbon nanotubes (SWCNTs) into fractions featuring a high content of semiconducting species is crucial for future application of SWCNTs in high-performance electronics. Herein we demonstrate a highly efficient and simple separation method that relies on selective interactions between tailor-made amphiphilic polymers and semiconducting SWCNTs in the presence of low viscosity separation media. High purity individualized semiconducting SWCNTs or even self-organized semiconducting sheets are separated from an as-produced SWCNT dispersion via a single weak field centrifugation run. Absorption and Raman spectroscopy are applied to verify the high purity of the obtained SWCNTs. Furthermore, SWCNT-network field-effect transistors were fabricated, which exhibit high ON/OFF ratios (10⁵) and field-effect mobilities (17 cm²/V·s). In addition to demonstrating the feasibility of high purity separation by a novel low complexity process, our method can be readily transferred to large scale production. PMID:27188435

  8. Highly Efficient and Scalable Separation of Semiconducting Carbon Nanotubes via Weak Field Centrifugation

    NASA Astrophysics Data System (ADS)

    Reis, Wieland G.; Weitz, R. Thomas; Kettner, Michel; Kraus, Alexander; Schwab, Matthias Georg; Tomović, Željko; Krupke, Ralph; Mikhael, Jules

    2016-05-01

    The identification of scalable processes that transfer random mixtures of single-walled carbon nanotubes (SWCNTs) into fractions featuring a high content of semiconducting species is crucial for future application of SWCNTs in high-performance electronics. Herein we demonstrate a highly efficient and simple separation method that relies on selective interactions between tailor-made amphiphilic polymers and semiconducting SWCNTs in the presence of low viscosity separation media. High purity individualized semiconducting SWCNTs or even self-organized semiconducting sheets are separated from an as-produced SWCNT dispersion via a single weak field centrifugation run. Absorption and Raman spectroscopy are applied to verify the high purity of the obtained SWCNTs. Furthermore, SWCNT-network field-effect transistors were fabricated, which exhibit high ON/OFF ratios (10⁵) and field-effect mobilities (17 cm²/V·s). In addition to demonstrating the feasibility of high purity separation by a novel low complexity process, our method can be readily transferred to large scale production.

  9. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation in over 130 pictures following the one in which the loss occurred. This work is one of the earliest studies in this area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
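
    Layer-based in-network adaptation of a scalable stream reduces, at its simplest, to forwarding the base layer plus as many enhancement layers as the measured bandwidth allows. The sketch below shows that decision rule; the layer bitrates are illustrative assumptions:

```python
# Sketch of layer-based, in-network adaptation for a scalable stream:
# forward the base layer plus as many enhancement layers as the measured
# bandwidth allows. Layer bitrates are illustrative assumptions.

LAYER_KBPS = [1500, 1200, 1800]   # base layer + two enhancement layers

def layers_to_forward(available_kbps: float) -> int:
    """Return how many layers fit the budget; the base layer is always kept."""
    total, kept = 0.0, 0
    for rate in LAYER_KBPS:
        if total + rate > available_kbps and kept >= 1:
            break
        total += rate
        kept += 1
    return max(kept, 1)           # never drop the base layer

if __name__ == "__main__":
    for bw in (1000, 2800, 5000):
        print(f"{bw} kbps available -> forward {layers_to_forward(bw)} layer(s)")
```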

  10. Investigation on scalable high-power lasers with enhanced 'eye-safety' for future weapon systems

    NASA Astrophysics Data System (ADS)

    Bigotta, S.; Diener, K.; Eichhorn, M.; Galecki, L.; Geiss, L.; Ibach, T.; Scharf, H.; von Salisch, M.; Schöner, J.; Vincent, G.

    2016-10-01

    The possible use of lasers as weapons is becoming more and more interesting for military forces. Besides the generation of high laser power and good beam quality, safety considerations, e.g., concerning eye hazards, are also of importance. The MELIAS (medium energy laser in the "eye-safe" spectral domain) project of ISL addresses these issues, and ISL has developed the most powerful solid-state laser in the "eye-safe" wavelength region to date. "Eye safety" in this context means that light at a wavelength of > 1.4 μm does not penetrate the eye and thus will not be focused onto the retina. The basic requirement for this technology is that the laser source be scalable in power to far beyond 100 kW without a significant deterioration in beam quality. ISL has studied a very promising laser technology: the erbium heat-capacity laser. This type of laser is characterised by a compact design, a simple and robust technology, and a scaling law which, in principle, allows the generation of laser power far beyond megawatts at small volumes. Previous investigations demonstrated the scalability of the solid-state heat-capacity laser (SSHCL): up to 4.65 kW and 440 J in less than 800 ms have been obtained. Optical-to-optical efficiencies of over 41% and slope efficiencies of over 51% were obtained. The residual thermal gradients, due to imperfect pumping homogeneity, negatively affect the performance in terms of laser pulse energy, duration, and beam quality. Over the course of the next two years, ISL will be designing a 25 to 30 kW erbium heat-capacity laser.

  11. Isolation of urinary exosomes for RNA biomarker discovery using a simple, fast, and highly scalable method.

    PubMed

    Alvarez, M Lucrecia

    2014-01-01

    Urinary exosomes are nanovesicles (40-100 nm) of endocytic origin that are secreted into the urine when a multivesicular body fuses with the membrane of cells from all nephron segments. Interest in urinary exosomes intensified after the discovery that they contain not only protein and mRNA but also microRNA (miRNA) markers of renal dysfunction and structural injury. Currently, the most widely used protocol for the isolation of urinary exosomes is based on ultracentrifugation, a method that is time consuming, requires expensive equipment, and has low scalability, which limits its applicability in the clinical practice. In this chapter, a simple, fast, and highly scalable step-by-step method for isolation of urinary exosomes is described. This method starts with a 10-min centrifugation of 10 ml urine, then the supernatant is saved (SN1), and the pellet is treated with dithiothreitol and heat to release and recover those exosomes entrapped by polymeric Tamm-Horsfall protein. The treated pellet is then resuspended and centrifuged, and the supernatant obtained (SN2) is combined with the first supernatant, SN1. Next, 3.3 ml of ExoQuick-TC, a commercial exosome precipitation reagent, is added to the total supernatant (SN1 + SN2), mixed well, and saved for at least 12 h at 4 °C. Finally, a pellet of exosomes is obtained after a 30-min centrifugation of the supernatant/ExoQuick-TC mix. We previously compared this method with five others used to isolate urinary exosomes and found that this is the simplest, fastest, and most effective alternative to ultracentrifugation-based protocols if the goal of the study is RNA profiling. A method for isolation and quantification of miRNAs and mRNAs from urinary exosomes is also described here. In addition, we provide a step-by-step description of exosomal miRNA profiling using universal reverse transcription and SYBR qPCR.

  12. Scalable fabrication of micron-scale graphene nanomeshes for high-performance supercapacitor applications

    DOE PAGES

    Kim, Hyun-Kyung; Bak, Seong-Min; Lee, Suk Woo; ...

    2016-01-27

    Graphene nanomeshes (GNMs) with nanoscale periodic or quasi-periodic nanoholes have attracted considerable interest because of unique features such as their open energy band gap, enlarged specific surface area, and high optical transmittance. These features are useful for applications in semiconducting devices, photocatalysis, sensors, and energy-related systems. We report on the facile and scalable preparation of multifunctional micron-scale GNMs with a high density of nanoperforations by catalytic carbon gasification. The catalytic carbon gasification process induces selective decomposition on the graphene adjacent to the metal catalyst, thus forming nanoperforations. Furthermore, the pore size, pore density distribution, and neck size of the GNMs can be controlled by adjusting the size and fraction of the metal oxide on graphene. The fabricated GNM electrodes exhibit superior electrochemical properties for supercapacitor (ultracapacitor) applications, including exceptionally high capacitance (253 F g⁻¹ at 1 A g⁻¹) and high rate capability (212 F g⁻¹ at 100 A g⁻¹) with excellent cycle stability (91% of the initial capacitance after 50 000 charge/discharge cycles). Moreover, the edge-enriched structure of GNMs plays an important role in achieving edge-selected and high-level nitrogen doping.

  13. Scalable fabrication of micron-scale graphene nanomeshes for high-performance supercapacitor applications

    SciTech Connect

    Kim, Hyun-Kyung; Bak, Seong-Min; Lee, Suk Woo; Kim, Myeong-Seong; Park, Byeongho; Lee, Su Chan; Choi, Yeon Jun; Jun, Seong Chan; Han, Joong Tark; Nam, Kyung-Wan; Chung, Kyung Yoon; Wang, Jian; Zhou, Jigang; Yang, Xiao-Qing; Roh, Kwang Chul; Kim, Kwang-Bum

    2016-01-27

    Graphene nanomeshes (GNMs) with nanoscale periodic or quasi-periodic nanoholes have attracted considerable interest because of unique features such as their open energy band gap, enlarged specific surface area, and high optical transmittance. These features are useful for applications in semiconducting devices, photocatalysis, sensors, and energy-related systems. We report on the facile and scalable preparation of multifunctional micron-scale GNMs with a high density of nanoperforations by catalytic carbon gasification. The catalytic carbon gasification process induces selective decomposition on the graphene adjacent to the metal catalyst, thus forming nanoperforations. Furthermore, the pore size, pore density distribution, and neck size of the GNMs can be controlled by adjusting the size and fraction of the metal oxide on graphene. The fabricated GNM electrodes exhibit superior electrochemical properties for supercapacitor (ultracapacitor) applications, including exceptionally high capacitance (253 F g⁻¹ at 1 A g⁻¹) and high rate capability (212 F g⁻¹ at 100 A g⁻¹) with excellent cycle stability (91% of the initial capacitance after 50 000 charge/discharge cycles). Moreover, the edge-enriched structure of GNMs plays an important role in achieving edge-selected and high-level nitrogen doping.

  14. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics

    NASA Astrophysics Data System (ADS)

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Yun Jaung, Jae; Kim, Yong-Hoon; Kyu Park, Sung

    2015-09-01

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting state to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry suggests that our results can promote the industrial adoption of soft materials for next-generation electronics.

  15. Scalable sub-micron patterning of organic materials toward high density soft electronics

    SciTech Connect

    Kim, Jaekyun; Kim, Myung -Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong -Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun -Hi; Noh, Yong -Young; Yun Jaung, Jae; Kim, Yong -Hoon; Kyu Park, Sung

    2015-09-28

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting state to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. As a result, the successful demonstration of organic semiconductor circuitry suggests that our results can promote the industrial adoption of soft materials for next-generation electronics.

  16. Scalable sub-micron patterning of organic materials toward high density soft electronics

    DOE PAGES

    Kim, Jaekyun; Kim, Myung -Gil; Kim, Jaehyun; ...

    2015-09-28

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting state to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. As a result, the successful demonstration of organic semiconductor circuitry suggests that our results can promote the industrial adoption of soft materials for next-generation electronics.

  17. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics.

    PubMed

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Jaung, Jae Yun; Kim, Yong-Hoon; Park, Sung Kyu

    2015-09-28

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting state to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry suggests that our results can promote the industrial adoption of soft materials for next-generation electronics.

  18. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics

    PubMed Central

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Yun Jaung, Jae; Kim, Yong-Hoon; Kyu Park, Sung

    2015-01-01

    The success of silicon-based high-density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high-speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large-area sensor arrays, and tailored optoelectronics, has brought intensive research on next-generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, have been demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile, general route for high-throughput sub-micron patterning of soft materials using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g., deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting state to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry suggests that our results can promote the industrial adoption of soft materials for next-generation electronics. PMID:26411932

  19. Scalable Functionalized Graphene Nano-platelets as Tunable Cathodes for High-performance Lithium Rechargeable Batteries

    PubMed Central

    Kim, Haegyeom; Lim, Hee-Dae; Kim, Sung-Wook; Hong, Jihyun; Seo, Dong-Hwa; Kim, Dae-chul; Jeon, Seokwoo; Park, Sungjin; Kang, Kisuk

    2013-01-01

    High-performance and cost-effective rechargeable batteries are key to the success of electric vehicles and large-scale energy storage systems. Extensive research has focused on the development of (i) new high-energy electrodes that can store more lithium or (ii) high-power nano-structured electrodes hybridized with carbonaceous materials. However, the current status of lithium batteries based on redox reactions of heavy transition metals still remains far below the demands required for the proposed applications. Herein, we present a novel approach using tunable functional groups on graphene nano-platelets as redox centers. The electrode can deliver high capacity of ~250 mAh g−1, power of ~20 kW kg−1 in an acceptable cathode voltage range, and provide excellent cyclability up to thousands of repeated charge/discharge cycles. The simple, mass-scalable synthetic route for the functionalized graphene nano-platelets proposed in this work suggests that the graphene cathode can be a promising new class of electrode. PMID:23514953

  20. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that must be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-times performance improvement on an 8-core workstation over the original sequential implementation of the system.
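
    A minimal sketch of the block-volume idea in item (1) above: partition a 3D image into distributable sub-volumes, each padded with a small halo so neighborhood operations can run independently per block. The block size and halo width are illustrative assumptions, not the platform's actual parameters:

```python
# Minimal sketch of size-adaptive, distributable block volumes: partition
# a 3D image into blocks that workers can process in parallel. Block size
# and halo width are illustrative assumptions.
import numpy as np

def block_volumes(volume: np.ndarray, block: int = 64, halo: int = 2):
    """Yield (slices, sub-volume) pairs; halo voxels support neighborhood ops."""
    zs, ys, xs = volume.shape
    for z in range(0, zs, block):
        for y in range(0, ys, block):
            for x in range(0, xs, block):
                sl = (slice(max(z - halo, 0), min(z + block + halo, zs)),
                      slice(max(y - halo, 0), min(y + block + halo, ys)),
                      slice(max(x - halo, 0), min(x + block + halo, xs)))
                yield sl, volume[sl]

if __name__ == "__main__":
    vol = np.zeros((256, 512, 512), dtype=np.int16)   # e.g., a CT volume
    print(sum(1 for _ in block_volumes(vol)), "blocks to distribute")
```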

  1. A Hardware-Efficient Scalable Spike Sorting Neural Signal Processor Module for Implantable High-Channel-Count Brain Machine Interfaces.

    PubMed

    Yang, Yuning; Boling, Sam; Mason, Andrew J

    2017-08-01

    Next-generation brain machine interfaces demand a high-channel-count neural recording system to wirelessly monitor activities of thousands of neurons. A hardware efficient neural signal processor (NSP) is greatly desirable to ease the data bandwidth bottleneck for a fully implantable wireless neural recording system. This paper demonstrates a complete multichannel spike sorting NSP module that incorporates all of the necessary spike detector, feature extractor, and spike classifier blocks. To meet high-channel-count and implantability demands, each block was designed to be highly hardware efficient and scalable while sharing resources efficiently among multiple channels. To process multiple channels in parallel, scalability analysis was performed, and the utilization of each block was optimized according to its input data statistics and the power, area and/or speed of each block. Based on this analysis, a prototype 32-channel spike sorting NSP scalable module was designed and tested on an FPGA using synthesized datasets over a wide range of signal to noise ratios. The design was mapped to 130 nm CMOS to achieve 0.75 μW power and 0.023 mm² area consumption per channel based on post-synthesis simulation results, which permits scalability of digital processing to 690 channels on a 4 × 4 mm² electrode array.
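
    As a concrete illustration of the first stage in such a spike-sorting chain, the sketch below implements a threshold spike detector with a noise-scaled threshold (a common convention in the spike-sorting literature, not necessarily this paper's exact rule):

```python
# Sketch of the first stage of a spike-sorting chain: a threshold spike
# detector. The noise-scaled threshold is a common convention in the
# spike-sorting literature, not necessarily the paper's exact rule.
import numpy as np

def detect_spikes(x: np.ndarray, fs: float, k: float = 4.0,
                  refractory_ms: float = 1.0) -> np.ndarray:
    """Return sample indices where |x| crosses k * robust noise estimate."""
    sigma = np.median(np.abs(x)) / 0.6745      # robust noise std estimate
    crossings = np.flatnonzero(np.abs(x) > k * sigma)
    # Enforce a refractory period so each spike is counted once.
    keep, last = [], -np.inf
    gap = int(refractory_ms * 1e-3 * fs)
    for i in crossings:
        if i - last >= gap:
            keep.append(i)
            last = i
    return np.asarray(keep, dtype=int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sig = rng.normal(0, 1, 30_000)
    sig[[5000, 12000, 20000]] += 12.0          # three synthetic spikes
    print(detect_spikes(sig, fs=30_000.0, k=5.0))
```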

  2. XGet: a highly scalable and efficient file transfer tool for clusters

    SciTech Connect

    Greenberg, Hugh; Ionkov, Latchesar; Minnich, Ronald

    2008-01-01

    As clusters rapidly grow in size, transferring files between nodes can no longer be handled by the traditional transfer utilities due to their inherent lack of scalability. In this paper, we describe a new file transfer utility called XGet, which was designed to address the scalability problem of standard tools. We compared XGet against four transfer tools: Bittorrent, Rsync, TFTP, and Udpcast, and our results show that XGet's performance is superior to these utilities in many cases.

  3. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    NASA Astrophysics Data System (ADS)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
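
    The latency-hiding communication model can be conveyed with a generic halo-exchange sketch: interior stencil points, which need no remote data, are updated while ghost-cell messages are in flight. This is a 1D illustration assuming mpi4py, not the AWP-ODC 3D 13-point implementation itself.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        left, right = (rank - 1) % size, (rank + 1) % size

        n = 1024
        u = np.random.rand(n + 2)            # one ghost cell on each side
        new = np.empty_like(u)

        for step in range(100):
            # Post non-blocking ghost-cell exchanges first...
            reqs = [comm.Isend(u[1:2], dest=left),   comm.Isend(u[n:n+1], dest=right),
                    comm.Irecv(u[0:1], source=left), comm.Irecv(u[n+1:], source=right)]
            # ...then update interior points while messages are in flight.
            new[2:n] = 0.5 * u[2:n] + 0.25 * (u[1:n-1] + u[3:n+1])
            MPI.Request.Waitall(reqs)
            # Only the two boundary points wait on the ghost cells.
            new[1] = 0.5 * u[1] + 0.25 * (u[0] + u[2])
            new[n] = 0.5 * u[n] + 0.25 * (u[n-1] + u[n+1])
            u, new = new, u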

  4. High-performance graphene-based supercapacitors made by a scalable blade-coating approach

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Liu, Jinzhang; Mirri, Francesca; Pasquali, Matteo; Motta, Nunzio; Holmes, John W.

    2016-04-01

    Graphene oxide (GO) sheets can form liquid crystals (LCs) in their aqueous dispersions, which become more viscous as the LC character strengthens. In this work we combine the viscous LC-GO solution with the blade-coating technique to make GO films, for constructing graphene-based supercapacitors in a scalable way. Reduced GO (rGO) films are prepared by wet chemical methods, using either hydrazine (HZ) or hydroiodic acid (HI). Solid-state supercapacitors with rGO films as electrodes and highly conductive carbon nanotube films as current collectors are fabricated, and the capacitive properties of the different rGO films are compared. It is found that the HZ-rGO film is superior to the HI-rGO film in achieving high capacitance, owing to the 3D structure of graphene sheets in the electrode. Compared to a gelled electrolyte, the use of liquid electrolyte (H2SO4) can further increase the capacitance to 265 F per gram (corresponding to 52 mF per cm²) of the HZ-rGO film.

  5. Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres.

    PubMed

    Banerjee, Arindam; Ghosh, Joydeep

    2004-05-01

    Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model; in fact, it can be considered a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques.
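
    The core of spkmeans is compact: both the documents and the cluster centers live on the unit hypersphere, and assignment maximizes cosine similarity. The sketch below is the classical batch algorithm, not the paper's frequency-sensitive variants.

        import numpy as np

        def spkmeans(X, k, iters=50, rng=np.random.default_rng(0)):
            X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-length inputs
            C = X[rng.choice(len(X), k, replace=False)].copy()
            for _ in range(iters):
                labels = np.argmax(X @ C.T, axis=1)            # cosine-similarity assignment
                for j in range(k):
                    members = X[labels == j]
                    if len(members):
                        c = members.sum(axis=0)
                        C[j] = c / np.linalg.norm(c)           # re-normalized center
            return labels, C

        X = np.random.rand(200, 2000)                          # e.g. term vectors
        labels, centers = spkmeans(X, k=10)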

  6. Very High Resolution Mapping of Tree Cover Using Scalable Deep Learning Architectures

    NASA Astrophysics Data System (ADS)

    Ganguly, Sangram; Basu, Saikat; Nemani, Ramakrishna; Mukhopadhyay, Supratik; Michaelis, Andrew; Votava, Petr; Saatchi, Sassan

    2016-04-01

    Several studies to date have provided an extensive knowledge base for estimating forest aboveground biomass (AGB), and recent advances in space-based modeling of the 3-D canopy structure, combined with canopy reflectance measured by passive optical sensors and radar backscatter, are providing improved satellite-derived AGB density mapping for large scale carbon monitoring applications. A key limitation in forest AGB estimation from remote sensing, however, is the large uncertainty in forest cover estimates from the coarse-to-medium resolution satellite-derived land cover maps (currently limited to the 30-m resolution of the USGS NLCD program). As part of our NASA Carbon Monitoring System Phase II activities, we have demonstrated that uncertainties in forest cover estimates at the Landsat scale result in high uncertainties in AGB estimation, predominantly in heterogeneous forest and urban landscapes. We have successfully tested an approach using scalable deep learning architectures (feature-enhanced deep belief networks and semantic segmentation using convolutional neural networks) and high-performance computing with NAIP airborne imagery data for mapping tree cover at 1 m over California and Maryland. Our first high resolution satellite training label dataset from the NAIP data is available at http://csc.lsu.edu/~saikat/deepsat/ . In a comparison with high resolution LiDAR data available over selected regions in the two states, we found our results to be promising both in terms of accuracy and in our ability to scale nationally. In this project, we propose to estimate very high resolution forest cover for the continental US at a spatial resolution of 1 m in support of reducing uncertainties in AGB estimation. The proposed work will substantially contribute to filling the gaps in ongoing carbon monitoring research and help quantify the errors and uncertainties in related carbon products.

  7. Perovskite ink with wide processing window for scalable high-efficiency solar cells

    NASA Astrophysics Data System (ADS)

    Yang, Mengjin; Li, Zhen; Reese, Matthew O.; Reid, Obadiah G.; Kim, Dong Hoe; Siol, Sebastian; Klein, Talysa R.; Yan, Yanfa; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2017-03-01

    Perovskite solar cells have made tremendous progress using laboratory-scale spin-coating methods in the past few years owing to advances in the control of perovskite film deposition. However, devices made via scalable methods still lag behind state-of-the-art spin-coated devices because of the complicated nature of perovskite crystallization from a precursor state. Here we demonstrate a chlorine-containing methylammonium lead iodide precursor formulation along with solvent tuning to enable a wide precursor-processing window (up to ˜8 min) and a rapid grain growth rate (as short as ˜1 min). Coupled with antisolvent extraction, this precursor ink delivers high-quality perovskite films with large-scale uniformity. The ink can be used by both spin-coating and blade-coating methods with indistinguishable film morphology and device performance. Using a blade-coated absorber, devices with 0.12-cm² and 1.2-cm² areas yield average efficiencies of 18.55% and 17.33%, respectively. We further demonstrate a 12.6-cm² four-cell module (88% geometric fill factor) with 13.3% stabilized active-area efficiency output.

  8. High-Sensitivity Charge Detection with a Single-Lead Quantum Dot for Scalable Quantum Computation

    NASA Astrophysics Data System (ADS)

    House, M. G.; Bartlett, I.; Pakkiam, P.; Koch, M.; Peretz, E.; van der Heijden, J.; Kobayashi, T.; Rogge, S.; Simmons, M. Y.

    2016-10-01

    We report the development of a high-sensitivity semiconductor charge sensor based on a quantum dot coupled to a single lead, designed to minimize the geometric requirements of a charge sensor for scalable quantum-computing architectures. The quantum dot is fabricated in Si:P using atomic precision lithography, and its charge transitions are measured with rf reflectometry. A second quantum dot with two leads, placed 42 nm away, serves both as a charge for the sensor to measure and as a conventional rf single-electron transistor (rf SET) with which to compare the charge-detection sensitivity. We demonstrate sensitivity equivalent to an integration time of 550 ns to detect a single charge with a signal-to-noise ratio of 1, compared with an integration time of 55 ns for the rf SET. This level of sensitivity is suitable for fast (<15 μs) single-spin readout in quantum-information applications, with a significantly reduced geometric footprint compared to the rf SET.

  9. ScalaTrace: Scalable Compression and Replay of Communication Traces for High Performance Computing

    SciTech Connect

    Noeth, M; Ratn, P; Mueller, F; Schulz, M; de Supinski, B R

    2008-05-16

    Characterizing the communication behavior of large-scale applications is a difficult and costly task due to code/system complexity and long execution times. While many tools to study this behavior have been developed, these approaches either aggregate information in a lossy way through high-level statistics or produce huge trace files that are hard to handle. We contribute an approach that provides orders of magnitude smaller, if not near-constant size, communication traces regardless of the number of nodes while preserving structural information. We introduce intra- and inter-node compression techniques for MPI events that are capable of extracting an application's communication structure. We further present a replay mechanism for the traces generated by our approach and discuss results of our implementation for BlueGene/L. Given this novel capability, we discuss its impact on communication tuning and beyond. To the best of our knowledge, such a concise representation of MPI traces in a scalable manner, combined with deterministic MPI call replay, is without precedent.
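
    The flavor of intra-node compression can be conveyed with a toy run-length encoder over repeated MPI event signatures; ScalaTrace's actual scheme additionally detects loop structures and merges traces across nodes.

        def rle_compress(events):
            # Collapse consecutive repeats of identical event signatures,
            # e.g. ("MPI_Isend", dest, size), into [event, count] pairs.
            out = []
            for e in events:
                if out and out[-1][0] == e:
                    out[-1][1] += 1
                else:
                    out.append([e, 1])
            return out

        def replay(compressed, emit):
            # Deterministically re-issue the recorded events in order.
            for event, count in compressed:
                for _ in range(count):
                    emit(event)

        trace = [("MPI_Isend", 1, 4096)] * 1000 + [("MPI_Waitall",)] \
              + [("MPI_Irecv", 0, 4096)] * 1000
        packed = rle_compress(trace)
        print(len(trace), "events ->", len(packed), "entries")
        replay(packed, emit=lambda e: None)    # stand-in for re-issuing MPI calls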

  10. pFoF: a highly scalable halo-finder for large cosmological data sets

    NASA Astrophysics Data System (ADS)

    Roy, Fabrice; Bouillot, Vincent R.; Rasera, Yann

    2014-04-01

    We present a parallel implementation of the friends-of-friends algorithm and an innovative technique for reducing complex-shaped data to a user-friendly format. This code, named pFoF, contains an optimized post-processing workflow that reduces the input data coming from gravitational codes, arranges them in a user-friendly format, and detects groups of particles using percolation and merging methods. The pFoF code also allows for detecting structures in sub- or non-cubic volumes of the comoving box. In addition, the code offers the possibility of performing new halo-finding passes with a lower percolation parameter, which is useful for more complex analyses. In this paper, we give standard test results and show performance diagnostics to stress the robustness of pFoF. This code has been extensively tested up to 32768 MPI processes and has proved to be highly scalable, with an efficiency of more than 75%. It has been used for analysing the Dark Energy Universe Simulation: Full Universe Runs (DEUS-FUR) project, the first cosmological simulations of the entire observable Universe, modelled with more than half a trillion dark matter particles.
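
    At its heart, friends-of-friends is percolation: any two particles closer than the linking length belong to the same group. A serial union-find sketch conveys the idea (pFoF's distributed implementation with merging across subdomains is far more involved):

        import numpy as np
        from scipy.spatial import cKDTree

        def friends_of_friends(pos, linking_length):
            parent = np.arange(len(pos))           # union-find forest
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i
            for i, j in cKDTree(pos).query_pairs(linking_length):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj                # merge the two groups
            return np.array([find(i) for i in range(len(pos))])

        pos = np.random.rand(5000, 3)
        labels = friends_of_friends(pos, linking_length=0.02)
        print("groups found:", len(np.unique(labels)))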

  11. Scalable graphite/copper bishell composite for high-performance interconnects.

    PubMed

    Yeh, Chao-Hui; Medina, Henry; Lu, Chun-Chieh; Huang, Kun-Ping; Liu, Zheng; Suenaga, Kazu; Chiu, Po-Wen

    2014-01-28

    We present the fabrication and characterization of novel electrical interconnect test lines made of a Cu/graphite bishell composite, with the graphite cap layer grown by electron cyclotron resonance chemical vapor deposition. Through this technique, conformal multilayer graphene can be formed on the predeposited Cu interconnects under CMOS-friendly conditions. The low-temperature (400 °C) deposition also makes the process scalable without inherent size limits. The graphite layer can boost the current-carrying capacity of the composite structure to 10⁸ A/cm², more than an order of magnitude higher than that of bare metal lines, and reduces the resistivity of fine test lines by ∼10%. Raman measurements reveal that physical breakdown occurs at ∼680-720 °C. Modeling the current vs. voltage curves up to breakdown shows that the maximum current density of the composites is limited by self-heating of the graphite, suggesting the strong role of phonon scattering at high fields and highlighting the significance of a metal counterpart for enhanced thermal dissipation.

  12. A highly scalable massively parallel fast marching method for the Eikonal equation

    NASA Astrophysics Data System (ADS)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising in a variety of scientific and engineering fields. It has long been deemed inherently sequential, and an efficient parallel algorithm applicable to large-scale practical applications was not previously available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computation beyond that of a sequential run to achieve unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and for getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
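
    For reference, the sequential narrow band fast marching update that the parallel method restarts within each band looks like the following first-order sketch on a uniform 2D grid (an illustration, not the paper's implementation):

        import heapq
        import numpy as np

        def fast_marching(speed, sources, h=1.0):
            # Solve the Eikonal equation |grad T| = 1/F from given sources.
            ny, nx = speed.shape
            T = np.full((ny, nx), np.inf)
            frozen = np.zeros((ny, nx), dtype=bool)
            heap = [(0.0, i, j) for (i, j) in sources]
            for (i, j) in sources:
                T[i, j] = 0.0
            heapq.heapify(heap)
            while heap:
                t, i, j = heapq.heappop(heap)
                if frozen[i, j]:
                    continue
                frozen[i, j] = True                      # accept smallest value
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < ny and 0 <= b < nx and not frozen[a, b]:
                        tx = min(T[a, b-1] if b > 0 else np.inf,
                                 T[a, b+1] if b < nx - 1 else np.inf)
                        ty = min(T[a-1, b] if a > 0 else np.inf,
                                 T[a+1, b] if a < ny - 1 else np.inf)
                        f = h / speed[a, b]
                        if abs(tx - ty) < f:             # two-sided quadratic update
                            new = 0.5 * (tx + ty + np.sqrt(2*f*f - (tx - ty)**2))
                        else:                            # one-sided update
                            new = min(tx, ty) + f
                        if new < T[a, b]:
                            T[a, b] = new
                            heapq.heappush(heap, (new, a, b))
            return T

        T = fast_marching(np.ones((64, 64)), sources=[(0, 0)])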

  13. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    SciTech Connect

    Snyder, Abigail C.; Jiao, Yu

    2010-10-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical, and scientists are unable to efficiently analyze all the data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can give scientists the opportunity to analyze all experimental data more effectively.
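
    Composing 1D solvers into a 4D integrator is what SciPy's iterated QUADPACK wrapper does, which makes it a compact illustrative stand-in for the modified GSL routines (the integrand here is hypothetical, not the SNS intensity model):

        from scipy import integrate

        def intensity(x, y, z, w):
            # Hypothetical smooth 4D integrand standing in for the SNS model.
            return (x*x + y*y + z*z + w*w) ** 0.5

        # nquad nests a 1D adaptive solver along each axis in turn.
        value, abserr = integrate.nquad(intensity, [(0, 1)] * 4)
        print(value, abserr)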

  14. Perovskite ink with wide processing window for scalable high-efficiency solar cells

    DOE PAGES

    Yang, Mengjin; Li, Zhen; Reese, Matthew O.; ...

    2017-03-20

    Perovskite solar cells have made tremendous progress using laboratory-scale spin-coating methods in the past few years owing to advances in the control of perovskite film deposition. However, devices made via scalable methods still lag behind state-of-the-art spin-coated devices because of the complicated nature of perovskite crystallization from a precursor state. Here we demonstrate a chlorine-containing methylammonium lead iodide precursor formulation along with solvent tuning to enable a wide precursor-processing window (up to ~8 min) and a rapid grain growth rate (as short as ~1 min). Coupled with antisolvent extraction, this precursor ink delivers high-quality perovskite films with large-scale uniformity. The ink can be used by both spin-coating and blade-coating methods with indistinguishable film morphology and device performance. Using a blade-coated absorber, devices with 0.12-cm² and 1.2-cm² areas yield average efficiencies of 18.55% and 17.33%, respectively. We further demonstrate a 12.6-cm² four-cell module (88% geometric fill factor) with 13.3% stabilized active-area efficiency output.

  15. WESTPA: an interoperable, highly scalable software package for weighted ensemble simulation and analysis.

    PubMed

    Zwier, Matthew C; Adelman, Joshua L; Kaus, Joseph W; Pratt, Adam J; Wong, Kim F; Rego, Nicholas B; Suárez, Ernesto; Lettieri, Steven; Wang, David W; Grabe, Michael; Zuckerman, Daniel M; Chong, Lillian T

    2015-02-10

    The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on underexplored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g., atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g., GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g., BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host–guest associations to nonspatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations and storing and analyzing WE simulation data, as well as examples of input and output.
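
    The replicate-and-prune bookkeeping at the heart of WE can be sketched as a weight-conserving resampling step within one bin of configuration space (illustrative only; WESTPA's production scheme involves progress-coordinate binning and recycling):

        import random

        def resample_bin(walkers, target=4):
            # walkers: list of (weight, state). Split heavy walkers and merge
            # light ones so the bin holds `target` walkers while total weight
            # is conserved exactly -- the invariant behind WE's rigorous kinetics.
            while len(walkers) < target:
                walkers.sort(key=lambda w: w[0])
                weight, state = walkers.pop()              # heaviest walker
                walkers += [(weight / 2, state)] * 2       # split into two copies
            while len(walkers) > target:
                walkers.sort(key=lambda w: w[0])
                (w1, s1), (w2, s2) = walkers.pop(0), walkers.pop(0)
                keep = s1 if random.random() < w1 / (w1 + w2) else s2
                walkers.append((w1 + w2, keep))            # weight-proportional merge
            return walkers

        print(resample_bin([(0.5, "a"), (0.25, "b"), (0.125, "c")]))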

  16. WESTPA: An interoperable, highly scalable software package for weighted ensemble simulation and analysis

    PubMed Central

    Zwier, Matthew C.; Adelman, Joshua L.; Kaus, Joseph W.; Pratt, Adam J.; Wong, Kim F.; Rego, Nicholas B.; Suárez, Ernesto; Lettieri, Steven; Wang, David W.; Grabe, Michael; Zuckerman, Daniel M.; Chong, Lillian T.

    2015-01-01

    The weighted ensemble (WE) path sampling approach orchestrates an ensemble of parallel calculations with intermittent communication to enhance the sampling of rare events, such as molecular associations or conformational changes in proteins or peptides. Trajectories are replicated and pruned in a way that focuses computational effort on under-explored regions of configuration space while maintaining rigorous kinetics. To enable the simulation of rare events at any scale (e.g. atomistic, cellular), we have developed an open-source, interoperable, and highly scalable software package for the execution and analysis of WE simulations: WESTPA (The Weighted Ensemble Simulation Toolkit with Parallelization and Analysis). WESTPA scales to thousands of CPU cores and includes a suite of analysis tools that have been implemented in a massively parallel fashion. The software has been designed to interface conveniently with any dynamics engine and has already been used with a variety of molecular dynamics (e.g. GROMACS, NAMD, OpenMM, AMBER) and cell-modeling packages (e.g. BioNetGen, MCell). WESTPA has been in production use for over a year, and its utility has been demonstrated for a broad set of problems, ranging from atomically detailed host-guest associations to non-spatial chemical kinetics of cellular signaling networks. The following describes the design and features of WESTPA, including the facilities it provides for running WE simulations, storing and analyzing WE simulation data, as well as examples of input and output. PMID:26392815

  17. Optical design of a scalable imaging system with compact configuration and high fidelity

    NASA Astrophysics Data System (ADS)

    Ji, Yiqun; Chen, Yuheng; Zhou, Jiankang; Chen, Xinhua

    2016-10-01

    Optical design of a novel optical imaging system is presented. It overcomes the scaling of aberrations by dividing the imaging task between a single objective lens, which forms a partially corrected intermediate image on a spherical surface, and an array of micro-lenses, each of which relays a small portion of the intermediate image to its respective sensor and corrects the residual aberrations. The system aims to achieve a large field of view without degrading resolution, something that traditionally designed optical imaging systems have found very difficult. This progress not only breaks through traditional restrictions but also opens wider applications for optical imaging systems. First, a configuration satisfying both the compactness and the high-performance requirements is determined from the working principle of the novel system. Then a design example is presented with a 50° field of view and 0.2 mrad resolution, which is maintained as the field of view scales. The optimized scalable system has a close-packed structure, and its dimension along the ray incidence is less than 300 mm.

  18. ScalaBLAST: A Scalable Implementation of BLAST for High Performance Data-Intensive Bioinformatics Analysis

    SciTech Connect

    Oehmen, Chris S.; Nieplocha, Jarek

    2006-08-01

    Genes in an organism's DNA (genome) have embedded in them information about proteins, which are the molecules that do most of a cell's work. A typical bacterial genome contains on the order of 5000 genes. Mammalian genomes can contain hundreds of thousands of genes. For each genome sequenced, the challenge is to identify the protein components (proteome) being actively used for a given set of conditions. Fundamentally, sequence alignment is a sequence matching problem focused on unlocking protein information embedded in the genetic code, making it possible to assemble a "tree of life" by comparing new sequences against all sequences from known organisms. But the memory footprint of sequence data is growing more rapidly than per-node core memory. Despite years of research and development, high performance sequence alignment applications either do not scale well, cannot accommodate very large databases in core, or require special hardware. We have developed a high performance sequence alignment application, ScalaBLAST, which accommodates very large databases and scales linearly to hundreds of processors on both distributed memory and shared memory architectures, representing a substantial improvement over the current state of the art in high performance sequence alignment with scaling and portability. ScalaBLAST relies on a collection of innovative techniques -- distributing the target database over available memory, multi-level parallelism to exploit concurrency, parallel I/O, and latency hiding through data prefetching -- to achieve high performance and scalability. This demonstrated approach of database sharing combined with effective task scheduling should have broad-ranging applications to other informatics-driven sciences.
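
    The database-sharing idea (splitting the target database across the aggregate memory of many nodes, each of which aligns every query against its local slice) can be sketched as follows; the partitioning scheme and scoring stand-in are illustrative, not ScalaBLAST's API.

        def partition_database(sequences, n_nodes):
            # Deal the target sequences out so each node holds a roughly
            # balanced shard entirely in core memory.
            shards = [[] for _ in range(n_nodes)]
            for k, seq in enumerate(sorted(sequences, key=len, reverse=True)):
                shards[k % n_nodes].append(seq)
            return shards

        def best_hit(query, shard):
            # Stand-in scoring kernel; a real node would run the BLAST
            # algorithm against its shard and return ranked local hits.
            return max(shard, key=lambda s: sum(a == b for a, b in zip(query, s)))

        db = ["ACGTACGT", "ACGGACGT", "TTTTACGT", "ACGTTTTT"]
        shards = partition_database(db, n_nodes=2)
        print([best_hit("ACGTACGA", shard) for shard in shards])  # a merge step would rank these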

  19. Personalised Prescription of Scalable High Intensity Interval Training to Inactive Female Adults of Different Ages

    PubMed Central

    Mair, Jacqueline L.

    2016-01-01

    Stepping is a convenient form of scalable high-intensity interval training (HIIT) that may lead to health benefits. However, the accurate personalised prescription of stepping is hampered by a lack of evidence on optimal stepping cadences and step heights for various populations. This study examined the acute physiological responses to stepping exercise at various heights and cadences in young (n = 14) and middle-aged (n = 14) females in order to develop an equation that facilitates prescription of stepping at targeted intensities. Participants completed a step test protocol consisting of randomised three-minute bouts at different step cadences (80, 90, 100, 110 steps·min-1) and step heights (17, 25, 30, 34 cm). Aerobic demand and heart rate values were measured throughout. Resting metabolic rate was measured in order to develop female specific metabolic equivalents (METs) for stepping. Results revealed significant differences between age groups for METs and heart rate reserve, and within-group differences for METs, heart rate, and metabolic cost, at different step heights and cadences. At a given step height and cadence, middle-aged females were required to work at an intensity on average 1.9 ± 0.26 METs greater than the younger females. A prescriptive equation was developed to assess energy cost in METs using multilevel regression analysis with factors of step height, step cadence and age. Considering recent evidence supporting accumulated bouts of HIIT exercise for health benefits, this equation, which allows HIIT to be personally prescribed to inactive and sedentary women, has potential impact as a public health exercise prescription tool. PMID:26848956

  20. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art.
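
    The unsupervised feature-learning step can be miniaturized to a single tied-weight autoencoder trained on image patches; the hidden activations then serve as patch features. This numpy sketch illustrates the principle only (the paper stacks convolutional autoencoder layers):

        import numpy as np

        rng = np.random.default_rng(0)

        def train_autoencoder(patches, n_hidden=32, lr=0.1, epochs=200):
            # One hidden layer with tied weights: h = s(Wx + b), x' = W^T h + c.
            n, d = patches.shape
            W = rng.normal(0.0, 0.1, (n_hidden, d))
            b, c = np.zeros(n_hidden), np.zeros(d)
            s = lambda z: 1.0 / (1.0 + np.exp(-z))
            for _ in range(epochs):
                H = s(patches @ W.T + b)         # encode
                R = H @ W + c                    # decode
                E = R - patches                  # reconstruction error
                dH = (E @ W.T) * H * (1.0 - H)   # backprop through the encoder
                W -= lr * (dH.T @ patches + H.T @ E) / n
                b -= lr * dH.mean(axis=0)
                c -= lr * E.mean(axis=0)
            return W, b

        patches = rng.random((500, 64))          # e.g. 8x8 patches, flattened
        W, b = train_autoencoder(patches)
        features = 1.0 / (1.0 + np.exp(-(patches @ W.T + b)))   # learned patch features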

  1. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069

  2. Implementation of scalable video coding deblocking filter from high-level SystemC description

    NASA Astrophysics Data System (ADS)

    Carballo, Pedro P.; Espino, Omar; Neris, Romén.; Hernández-Fernández, Pedro; Szydzik, Tomasz M.; Núñez, Antonio

    2013-05-01

    This paper describes key concepts in the design and implementation of a deblocking filter (DF) for an H.264/SVC video decoder. The DF supports QCIF and CIF video formats with temporal and spatial scalability. The design flow starts from a SystemC functional model and has been refined using a high-level synthesis methodology down to an RTL microarchitecture. The process is guided by performance measurements (latency, cycle time, power, resource utilization) with the objective of assuring the quality of results of the final system. The functional model of the DF is created incrementally from the AVC DF model, using the OpenSVC source code as reference. The design flow continues with logic synthesis and implementation on the FPGA using various strategies; the final implementation is chosen among those that meet the timing constraints. The DF is capable of running at 100 MHz, and macroblocks are processed in 6,500 clock cycles, for a throughput of 130 fps for the QCIF format and 37 fps for the CIF format. The proposed architecture for the complete H.264/SVC decoder is composed of an OMAP 3530 SoC (ARM Cortex-A8 GPP + DSP) and a Virtex-5 FPGA acting as a coprocessor for the DF implementation. The DF is connected to the OMAP SoC using the GPMC interface. A validation platform has been developed using the embedded PowerPC processor in the FPGA, composing an SoC that integrates frame generation and visualization on a TFT screen. The FPGA implements both the DF core and a GPMC slave core. Both cores are connected to the PowerPC440 embedded processor using LocalLink interfaces. The FPGA also contains a local memory capable of storing the information necessary to filter a complete frame and to store a decoded picture frame. The complete system is implemented on a Virtex5 FX70T device.

  3. Scalable Synthesis of Defect Abundant Si Nanorods for High-Performance Li-Ion Battery Anodes.

    PubMed

    Wang, Jing; Meng, Xiangcai; Fan, Xiulin; Zhang, Wenbo; Zhang, Hongyong; Wang, Chunsheng

    2015-06-23

    Microsized nanostructured silicon-carbon composite is a promising anode material for high energy Li-ion batteries. However, large-scale synthesis of high-performance nano-Si materials at low cost still remains a significant challenge. We report a scalable, low cost method to synthesize Al/Na-doped, defect-abundant Si nanorods that have excellent electrochemical performance with high first-cycle Coulombic efficiency (90%). The unique Si nanorods are synthesized by acid etching a refined and rapidly solidified eutectic Al-Si ingot. To maintain high electronic conductivity, a thin layer of carbon is then coated on the Si nanorods by carbonization of self-polymerized polydopamine (PDA) at 800 °C. The carbon coated Si nanorods (Si@C) electrode at 0.9 mg cm−2 loading (corresponding to an area-specific capacity of ∼2.0 mAh cm−2) exhibits a reversible capacity of ∼2200 mAh g−1 at 100 mA g−1 current, and maintains ∼700 mAh g−1 over 1000 cycles at 1000 mA g−1 with a capacity decay rate of 0.02% per cycle. High Coulombic efficiencies of 87% in the first cycle and ∼99.7% after 5 cycles are achieved due to the formation of an artificial Al2O3 solid electrolyte interphase (SEI) on the Si surface and the low surface area (31 m² g−1), which has never before been reported for nano-Si anodes. The excellent electrochemical performance results from the massive defects (twins, stacking faults, dislocations) and Al/Na doping in the Si nanorods induced by rapid solidification and Na salt modification; this greatly enhances the robustness of the Si against volume changes and alleviates the mechanical stress/strain of the Si nanorods during the lithium insertion/extraction process. Introducing massive defects and Al/Na doping in eutectic Si nanorods for Li-ion battery anodes is unexplored territory. We venture into this uncharted territory to commercialize this nanostructured Si anode for the next generation of Li-ion batteries.

  4. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparison against the complex step method. The fully coupled large-scale adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW

  5. Ultra-High Performance, High-Temperature Superconducting Wires via Cost-effective, Scalable, Co-evaporation Process

    SciTech Connect

    Kim, Dr. Hosup; Oh, Sang-Soo; Ha, HS; Youm, D; Moon, SH; Kim, JH; Heo, YU; Dou, SX; Wee, Sung Hun; Goyal, Amit

    2014-01-01

    Long-length, high-temperature superconducting (HTS) wires capable of carrying high critical current, Ic, are required for a wide range of applications. Here, we report extremely high performance HTS wires based on 5 μm thick SmBa2Cu3O7−δ (SmBCO) single layer films on textured metallic templates. SmBCO layer wires over 20 meters long were deposited by a cost-effective, scalable co-evaporation process using a batch-type drum in a dual chamber. All deposition parameters influencing the composition, phase, and texture of the films were optimized via a unique combinatorial method that is broadly applicable for co-evaporation of other promising complex materials containing several cations. Thick SmBCO layers deposited under optimized conditions exhibit excellent cube-on-cube epitaxy. Such excellent structural epitaxy over the entire thickness results in exceptionally high Ic performance, with average Ic over 1000 A/cm for the entire 22 meter long wire and maximum Ic over 1,500 A/cm for a short 12 cm long tape. The Ic values reported in this work are the highest values ever reported from any length of cuprate-based HTS wire or conductor.

  6. Ultra-High Performance, High-Temperature Superconducting Wires via Cost-effective, Scalable, Co-evaporation Process

    PubMed Central

    Kim, Ho-Sup; Oh, Sang-Soo; Ha, Hong-Soo; Youm, Dojun; Moon, Seung-Hyun; Kim, Jung Ho; Dou, Shi Xue; Heo, Yoon-Uk; Wee, Sung-Hun; Goyal, Amit

    2014-01-01

    Long-length, high-temperature superconducting (HTS) wires capable of carrying high critical current, Ic, are required for a wide range of applications. Here, we report extremely high performance HTS wires based on 5 μm thick SmBa2Cu3O7 − δ (SmBCO) single layer films on textured metallic templates. SmBCO layer wires over 20 meters long were deposited by a cost-effective, scalable co-evaporation process using a batch-type drum in a dual chamber. All deposition parameters influencing the composition, phase, and texture of the films were optimized via a unique combinatorial method that is broadly applicable for co-evaporation of other promising complex materials containing several cations. Thick SmBCO layers deposited under optimized conditions exhibit excellent cube-on-cube epitaxy. Such excellent structural epitaxy over the entire thickness results in exceptionally high Ic performance, with average Ic over 1,000 A/cm-width for the entire 22 meter long wire and maximum Ic over 1,500 A/cm-width for a short 12 cm long tape. The Ic values reported in this work are the highest values ever reported from any lengths of cuprate-based HTS wire or conductor. PMID:24752189

  7. Highly scalable, atomically thin WSe2 grown via metal-organic chemical vapor deposition.

    PubMed

    Eichfeld, Sarah M; Hossain, Lorraine; Lin, Yu-Chuan; Piasecki, Aleksander F; Kupp, Benjamin; Birdwell, A Glen; Burke, Robert A; Lu, Ning; Peng, Xin; Li, Jie; Azcatl, Angelica; McDonnell, Stephen; Wallace, Robert M; Kim, Moon J; Mayer, Theresa S; Redwing, Joan M; Robinson, Joshua A

    2015-02-24

    Tungsten diselenide (WSe2) is a two-dimensional material that is of interest for next-generation electronic and optoelectronic devices due to its direct bandgap of 1.65 eV in the monolayer form and excellent transport properties. However, technologies based on this 2D material cannot be realized without a scalable synthesis process. Here, we demonstrate the first scalable synthesis of large-area, mono- and few-layer WSe2 via metal-organic chemical vapor deposition using tungsten hexacarbonyl (W(CO)6) and dimethylselenium ((CH3)2Se). In addition to being intrinsically scalable, this technique allows for precise control of the vapor-phase chemistry, which is unobtainable using more traditional oxide vaporization routes. We show that temperature, pressure, Se:W ratio, and substrate choice have a strong impact on the ensuing atomic layer structure, with optimized conditions yielding >8 μm size domains. Raman spectroscopy, atomic force microscopy (AFM), and cross-sectional transmission electron microscopy (TEM) confirm that crystalline mono- to multilayer WSe2 is achievable. Finally, TEM and vertical current/voltage transport provide evidence that a pristine van der Waals gap exists in WSe2/graphene heterostructures.

  8. Scalable high-power redox capacitors with aligned nanoforests of crystalline MnO₂ nanorods by high voltage electrophoretic deposition.

    PubMed

    Santhanagopalan, Sunand; Balram, Anirudh; Meng, Dennis Desheng

    2013-03-26

    It is commonly perceived that reduction-oxidation (redox) capacitors have to sacrifice power density to achieve higher energy density than carbon-based electric double layer capacitors. In this work, we report the synergetic advantages of combining the high crystallinity of hydrothermally synthesized α-MnO2 nanorods with alignment for high performance redox capacitors. Such an approach is enabled by high voltage electrophoretic deposition (HVEPD) technology, which can produce vertically aligned nanoforests with great process versatility. The scalable nanomanufacturing process is demonstrated by roll-printing an aligned forest of α-MnO2 nanorods on a large flexible substrate (1 inch by 1 foot). The electrodes show very high power density (340 kW/kg at an energy density of 4.7 Wh/kg) and excellent cyclability (over 92% capacitance retention over 2000 cycles). Pretreatment of the substrate and use of a conductive holding layer have also been shown to significantly reduce the contact resistance between the aligned nanoforests and the substrates. High areal specific capacitances of around 8500 μF/cm² have been obtained for each electrode in a two-electrode device configuration. Over 93% capacitance retention was observed when the cycling current densities were increased from 0.25 to 10 mA/cm², indicating the high rate capability of the fabricated electrodes and resulting in the very high attainable power density. The high performance of the electrodes is attributed to the crystallographic structure, 1D morphology, aligned orientation, and low contact resistance.

  9. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    NASA Astrophysics Data System (ADS)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs, as well as their synchronization, at these extreme scales takes up a significant portion of the total simulation time and results in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extending this method to solve complex multi-scale problems on Exascale machines.
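
    The relaxed-synchronization idea can be emulated serially: a stencil update occasionally reads a randomly delayed (stale) time level at a subdomain boundary, as a late-arriving message would provide. This toy 1D diffusion sketch shows the mechanics only; the paper's asynchrony-tolerant schemes modify the stencils themselves to recover accuracy.

        import numpy as np

        rng = np.random.default_rng(1)
        n, steps, alpha = 64, 400, 0.25
        u = np.sin(np.linspace(0.0, np.pi, n))
        history = [u.copy()]                     # past time levels (message buffer)
        m = n // 2                               # pretend a PE boundary sits here

        for t in range(steps):
            new = u.copy()
            # Interior points always see current data...
            new[1:-1] = u[1:-1] + alpha * (u[2:] - 2*u[1:-1] + u[:-2])
            # ...but the boundary point may see a value delayed by 0-2 steps,
            # emulating an unsynchronized neighboring PE.
            stale = history[max(0, len(history) - 1 - rng.integers(0, 3))]
            new[m] = u[m] + alpha * (stale[m+1] - 2*u[m] + u[m-1])
            u = new
            history.append(u.copy())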

  10. Analysis of the scalability of diffraction-limited fiber lasers and amplifiers to high average power.

    PubMed

    Dawson, Jay W; Messerly, Michael J; Beach, Raymond J; Shverdin, Miroslav Y; Stappaerts, Eddy A; Sridharan, Arun K; Pax, Paul H; Heebner, John E; Siders, Craig W; Barty, C P J

    2008-08-18

    We analyze the scalability of diffraction-limited fiber lasers considering thermal, non-linear, damage and pump coupling limits as well as fiber mode field diameter (MFD) restrictions. We derive new general relationships based upon practical considerations. Our analysis shows that if the fiber's MFD could be increased arbitrarily, 36 kW of power could be obtained with diffraction-limited quality from a fiber laser or amplifier. This power limit is determined by thermal and non-linear limits that combine to prevent further power scaling, irrespective of increases in mode size. However, limits to the scaling of the MFD may restrict fiber lasers to lower output powers.

  11. Simulating chemical energies to high precision with fully-scalable quantum algorithms on superconducting qubits

    NASA Astrophysics Data System (ADS)

    O'Malley, Peter; Babbush, Ryan; Kivlichan, Ian; Romero, Jhonathan; McClean, Jarrod; Tranter, Andrew; Barends, Rami; Kelly, Julian; Chen, Yu; Chen, Zijun; Jeffrey, Evan; Fowler, Austin; Megrant, Anthony; Mutus, Josh; Neill, Charles; Quintana, Christopher; Roushan, Pedram; Sank, Daniel; Vainsencher, Amit; Wenner, James; White, Theodore; Love, Peter; Aspuru-Guzik, Alan; Neven, Hartmut; Martinis, John

    Quantum simulations of molecules have the potential to calculate industrially important chemical parameters beyond the reach of classical methods with relatively modest quantum resources. Recent years have seen dramatic progress in both superconducting qubits and quantum chemistry algorithms. Here, we present experimental demonstrations of two fully scalable algorithms for finding the dissociation energy of hydrogen: the variational quantum eigensolver and iterative phase estimation. This represents the first calculation of a dissociation energy to chemical accuracy with a non-precompiled algorithm. These results show the promise of chemistry as the "killer app" for quantum computers, even before the advent of full error-correction.
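
    The classical outer loop of the variational quantum eigensolver can be shown numerically: minimize the energy expectation of a parameterized state. Here a toy 2x2 Hamiltonian and a one-parameter ansatz stand in for the hardware experiment, which instead estimates Pauli-term expectations by repeated measurement on the qubits.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Toy 2x2 Hamiltonian (illustrative numbers, not the H2 Hamiltonian).
        H = np.array([[-1.05, 0.35],
                      [ 0.35, -0.25]])

        def energy(theta):
            # One-parameter ansatz |psi> = Ry(theta)|0>.
            psi = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
            return psi @ H @ psi

        res = minimize_scalar(energy, bounds=(0.0, 2.0 * np.pi), method="bounded")
        print(f"VQE energy {res.fun:.6f} vs exact {np.linalg.eigvalsh(H)[0]:.6f}")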

  12. Three-dimensional Finite Element Formulation and Scalable Domain Decomposition for High Fidelity Rotor Dynamic Analysis

    NASA Technical Reports Server (NTRS)

    Datta, Anubhav; Johnson, Wayne R.

    2009-01-01

    This paper has two objectives. The first objective is to formulate a 3-dimensional Finite Element Model for the dynamic analysis of helicopter rotor blades. The second objective is to implement and analyze a dual-primal iterative substructuring based Krylov solver, which is parallel and scalable, for the solution of the 3-D FEM analysis. The numerical and parallel scalability of the solver is studied using two prototype problems - one for ideal hover (symmetric) and one for a transient forward flight (non-symmetric) - both carried out on up to 48 processors. In both hover and forward flight conditions, a perfect linear speed-up is observed, for a given problem size, up to the point of substructure optimality. Substructure optimality and the linear parallel speed-up range are both shown to depend on the problem size as well as on the selection of the coarse problem. With a larger problem size, linear speed-up is restored up to the new substructure optimality. The solver also scales with problem size - even though this conclusion is premature given the small prototype grids considered in this study.

  13. A scalable strategy for high-throughput GFP tagging of endogenous human proteins

    PubMed Central

    Leonetti, Manuel D.; Sekine, Sayaka; Kamiyama, Daichi; Weissman, Jonathan S.; Huang, Bo

    2016-01-01

    A central challenge of the postgenomic era is to comprehensively characterize the cellular role of the ∼20,000 proteins encoded in the human genome. To systematically study protein function in a native cellular background, libraries of human cell lines expressing proteins tagged with a functional sequence at their endogenous loci would be very valuable. Here, using electroporation of Cas9 nuclease/single-guide RNA ribonucleoproteins and taking advantage of a split-GFP system, we describe a scalable method for the robust, scarless, and specific tagging of endogenous human genes with GFP. Our approach requires no molecular cloning and allows a large number of cell lines to be processed in parallel. We demonstrate the scalability of our method by targeting 48 human genes and show that the resulting GFP fluorescence correlates with protein expression levels. We next present how our protocols can be easily adapted for the tagging of a given target with GFP repeats, critically enabling the study of low-abundance proteins. Finally, we show that our GFP tagging approach allows the biochemical isolation of native protein complexes for proteomic studies. Taken together, our results pave the way for the large-scale generation of endogenously tagged human cell lines for the proteome-wide analysis of protein localization and interaction networks in a native cellular context. PMID:27274053

  14. Developing Defined and Scalable 3D Culture Systems for Culturing Human Pluripotent Stem Cells at High Densities.

    PubMed

    Lei, Yuguo; Jeong, Daeun; Xiao, Jifang; Schaffer, David V

    2014-06-01

    Human pluripotent stem cells (hPSCs) - including embryonic stem cells (hESCs) and induced pluripotent stem cells (hiPSCs) - are very promising candidates for cell therapies, tissue engineering, high throughput pharmacology screens, and toxicity testing. These applications require large numbers of high quality cells; however, scalable production of human pluripotent stem cells and their derivatives at a high density and under well-defined conditions has been a challenge. We recently reported a simple, efficient, fully defined, scalable, and good manufacturing practice (GMP) compatible 3D culture system based on a thermoreversible hydrogel for hPSC expansion and differentiation. Here, we describe additional design rationale and characterization of this system. For instance, we have determined that culturing hPSCs as a suspension in a liquid medium can exhibit lower volumetric yields due to cell agglomeration and possible shear force-induced cell loss. By contrast, using hydrogels as 3D scaffolds for culturing hPSCs reduces aggregation and may insulate from shear forces. Additionally, hydrogel-based 3D culture systems can support efficient hPSC expansion and differentiation at a high density if compatible with hPSC biology. Finally, there are considerable opportunities for future development to further enhance hydrogel-based 3D culture systems for producing hPSCs and their progeny.

  15. Scalable coherent interface

    SciTech Connect

    Alnaes, K.; Kristiansen, E.H. ); Gustavson, D.B. ); James, D.V. )

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs.

  16. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    SciTech Connect

    Karthik, Rajasekar

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are among the key open-source technologies and industry-standard practices adopted in this architecture.

  17. High power impulse magnetron sputtering and related discharges: scalable plasma sources for plasma-based ion implantation and deposition

    SciTech Connect

    Anders, Andre

    2009-09-01

    High power impulse magnetron sputtering (HIPIMS) and related self-sputtering techniques are reviewed from the viewpoint of plasma-based ion implantation and deposition (PBII&D). HIPIMS combines the classical, scalable sputtering technology with pulsed power, which is an elegant way of ionizing the sputtered atoms. Related approaches, such as sustained self-sputtering, are also considered. The resulting intense flux of ions to the substrate consists of a mixture of metal and gas ions when using a process gas, or of metal ions only when using 'gasless' or pure self-sputtering. In many respects, processing with HIPIMS plasmas is similar to processing with filtered cathodic arc plasmas, though the former is easier to scale to large areas. Both ion implantation and etching (high bias voltage, without deposition) and thin film deposition (low bias, or bias of low duty cycle) have been demonstrated.

  18. Parallel grid library with adaptive mesh refinement for development of highly scalable simulations

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2012-04-01

    As the single CPU core performance is saturating while the number of cores in the fastest supercomputers increases exponentially, the parallel performance of simulations on distributed memory machines is crucial. At the same time, utilizing efficiently the large number of available cores presents a challenge, especially in simulations with run-time adaptive mesh refinement. We have developed a generic grid library (dccrg) aimed at finite volume simulations that is easy to use and scales well up to tens of thousands of cores. The grid has several attractive features: It 1) allows an arbitrary C++ class or structure to be used as cell data; 2) provides a simple interface for adaptive mesh refinement during a simulation; 3) encapsulates the details of MPI communication when updating the data of neighboring cells between processes; and 4) provides a simple interface to run-time load balancing, e.g. domain decomposition, through the Zoltan library. Dccrg is freely available for anyone to use, study and modify under the GNU Lesser General Public License v3. We will present the implementation of dccrg, simple and advanced usage examples and scalability results on various supercomputers and problems.
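
    As a rough illustration of the usage pattern described above (arbitrary per-cell data, queued refinement requests, encapsulated neighbor updates), the following Python sketch mimics the shape of such an interface; the real dccrg library is C++, so the class and method names here are simplified stand-ins:

        class Grid:
            """Toy analogue of a distributed cartesian AMR grid (not the real C++ API)."""
            def __init__(self, n_cells: int):
                self.data = {cid: {"rho": 1.0} for cid in range(n_cells)}  # arbitrary cell data
                self.pending = set()

            def refine_completely(self, cell_id):
                self.pending.add(cell_id)            # queue a refinement request

            def stop_refining(self):
                for cid in sorted(self.pending, key=str):   # execute queued refinements
                    parent = self.data.pop(cid)
                    for child in range(8):           # octree refinement: 8 children per cell
                        self.data[(cid, child)] = dict(parent)
                self.pending.clear()

            def update_copies_of_remote_neighbors(self):
                # In the real library this hides the MPI exchange of neighbor-cell
                # data between processes; a single-process toy has nothing to send.
                pass

        grid = Grid(n_cells=16)
        grid.refine_completely(5)                    # mark cell 5 for refinement this step
        grid.stop_refining()
        grid.update_copies_of_remote_neighbors()
        print(len(grid.data))                        # 15 coarse cells + 8 children = 23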

  19. Integrated Scalable Parallel Firewall and Intrusion Detection System for High-Speed Networks

    SciTech Connect

    Fulp, Errin W; Anderson, Robert E; Ahn, David K

    2009-08-31

    This project developed a new scalable network firewall and Intrusion Protection System (IPS) that can manage increasing traffic loads, higher network speeds, and strict Quality of Service (QoS) requirements. This new approach provides a strong foundation for next-generation network security technologies and products that address growing and unmet needs in the government and corporate sectors by delivering Optimal Network Security. Controlling access is an essential task for securing networks that are vital to private industry, government agencies, and the military. This access can be granted or denied based on the packet header or payload contents. For example, a simple network firewall enforces a security policy by inspecting and filtering the packet headers. As a complement to the firewall, an Intrusion Detection System (IDS) inspects the packet payload for known threat signatures; for example, a virus or a worm. Similar to a firewall policy, IDS policies consist of multiple rules that specify an action for matching packets. Each rule can specify different items, such as the signature contents and the signature location within the payload. When the firewall and IDS are merged into one device, the resulting system is referred to as an Intrusion Protection System (IPS), which provides both packet header and payload inspections. Having both types of inspections is very desirable and more manageable in a single device.
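
    A minimal sketch of the merged header-plus-payload inspection model follows; the rule fields (header predicates, payload signature, optional offset) track the description above, while the concrete rule format and first-match semantics are illustrative assumptions, not the project's implementation:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Rule:
            action: str                          # "accept" or "deny"
            src: Optional[str] = None            # header predicates (None = wildcard)
            dst_port: Optional[int] = None
            signature: Optional[bytes] = None    # payload predicate (IDS part)
            offset: Optional[int] = None         # required signature location

            def matches(self, pkt) -> bool:
                if self.src is not None and pkt["src"] != self.src:
                    return False
                if self.dst_port is not None and pkt["dst_port"] != self.dst_port:
                    return False
                if self.signature is not None:
                    payload = pkt["payload"]
                    if self.offset is not None:
                        return payload[self.offset:self.offset + len(self.signature)] == self.signature
                    return self.signature in payload
                return True

        def inspect(pkt, policy, default="deny"):
            # First-match semantics: each rule may inspect header and payload.
            for rule in policy:
                if rule.matches(pkt):
                    return rule.action
            return default

        policy = [Rule("deny", signature=b"\x90\x90\x90\x90"),   # worm-like NOP sled
                  Rule("accept", dst_port=80)]
        print(inspect({"src": "10.0.0.5", "dst_port": 80, "payload": b"GET /"}, policy))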

  20. SYMNET: an optical interconnection network for scalable high-performance symmetric multiprocessors.

    PubMed

    Louri, Ahmed; Kodi, Avinash Karanth

    2003-06-10

    We address the primary limitation of the bandwidth to satisfy the demands for address transactions in future cache-coherent symmetric multiprocessors (SMPs). It is widely known that the bus speed and the coherence overhead limit the snoop/address bandwidth needed to broadcast address transactions to all processors. As a solution, we propose a scalable address subnetwork called symmetric multiprocessor network (SYMNET) in which address requests and snoop responses of SMPs are implemented optically. SYMNET not only has the ability to pipeline address requests, but also multiple address requests from different processors can propagate through the address subnetwork simultaneously. This is in contrast with all electrical bus-based SMPs, where only a single request is broadcast on the physical address bus at any given point in time. The simultaneous propagation of multiple address requests in SYMNET increases the available address bandwidth and lowers the latency of the network, but the preservation of cache coherence can no longer be maintained with the usual fast snooping protocols. A modified snooping cache-coherence protocol, coherence in SYMNET (COSYM) is introduced to solve the coherence problem. We evaluated SYMNET with a subset of Splash-2 benchmarks and compared it with the electrical bus-based MOESI (modified, owned, exclusive, shared, invalid) protocol. Our simulation studies have shown a 5-66% improvement in execution time for COSYM as compared with MOESI for various applications. Simulations have also shown that the average latency for a transaction to complete by use of COSYM protocol was 5-78% better than the MOESI protocol. SYMNET can scale up to hundreds of processors while still using fast snooping-based cache-coherence protocols, and additional performance gains may be attained with further improvement in optical device technology.

  1. Neurogaming Technology Meets Neuroscience Education: A Cost-Effective, Scalable, and Highly Portable Undergraduate Teaching Laboratory for Neuroscience.

    PubMed

    de Wit, Bianca; Badcock, Nicholas A; Grootswagers, Tijl; Hardwick, Katherine; Teichmann, Lina; Wehrman, Jordan; Williams, Mark; Kaplan, David Michael

    2017-01-01

    Active research-driven approaches that successfully incorporate new technology are known to catalyze student learning. Yet achieving these objectives in neuroscience education is especially challenging due to the prohibitive costs and technical demands of research-grade equipment. Here we describe a method that circumvents these factors by leveraging consumer EEG-based neurogaming technology to create an affordable, scalable, and highly portable teaching laboratory for undergraduate courses in neuroscience. This laboratory is designed to give students hands-on research experience, consolidate their understanding of key neuroscience concepts, and provide a unique real-time window into the working brain. Survey results demonstrate that students found the lab sessions engaging. Students also reported the labs enhanced their knowledge about EEG, their course material, and neuroscience research in general.

  2. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
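
    The core idea of a balanced regional parallelization, dividing the genome into similarly sized regions so every worker receives a comparable share of sequence, can be sketched as follows; the fixed-size chunking is illustrative only, and Churchill's production pipeline handles boundary effects this toy ignores:

        def balanced_regions(chrom_lengths, n_workers):
            # Split every chromosome into chunks of roughly total/n_workers bases
            # so the per-region work is comparable across workers.
            total = sum(chrom_lengths.values())
            target = max(1, total // n_workers)
            regions = []
            for chrom, length in chrom_lengths.items():
                pos = 0
                while pos < length:
                    end = min(pos + target, length)
                    regions.append((chrom, pos, end))
                    pos = end
            return regions

        # Illustrative lengths for two human chromosomes (GRCh38 reference sizes).
        regions = balanced_regions({"chr1": 248_956_422, "chr2": 242_193_529},
                                   n_workers=8)
        print(len(regions), regions[0])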

  3. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    NASA Astrophysics Data System (ADS)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

    This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It takes advantage of prior knowledge of the frequencies that are present in the different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, by merely modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and against DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
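
    The underlying intuition, that a scan should visit a subband's likely-nonzero coefficients first so that entropy coding sees shorter significant runs, can be demonstrated with a toy block; the "adaptive" order below is a simple column-major stand-in for a vertically oriented subband, not the actual DWTSB-AS scan tables:

        import numpy as np

        def diagonal_order(n):
            # zigzag-style scan: visit coefficients in order of increasing frequency
            return sorted(((i, j) for i in range(n) for j in range(n)),
                          key=lambda p: (p[0] + p[1], p[0]))

        def column_major_order(n):
            # toy "adaptive" scan for a subband with vertically oriented energy
            return [(i, j) for j in range(n) for i in range(n)]

        block = np.zeros((4, 4))
        block[:, 0] = [9, 7, 4, 2]   # HL-like block: energy concentrated in one column

        for name, order in (("diagonal", diagonal_order(4)),
                            ("adaptive", column_major_order(4))):
            coeffs = [block[i, j] for i, j in order]
            last = max(k for k, c in enumerate(coeffs) if c != 0)
            print(f"{name}: last nonzero at scan position {last}")
        # The adaptive order ends its significant run far earlier (3 vs 9),
        # which is what yields the extra compression.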

  4. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URL's (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dim. array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our …
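
    A minimal PySpark sketch of the ingest-and-reduce pattern described above (partition a time-sorted URL list, load one variable per partition, reduce to a statistic in memory) is given below; load_variable is a hypothetical stand-in for a netCDF/OPeNDAP reader, and the sRDD API itself is not reproduced:

        import numpy as np
        from pyspark import SparkContext

        def load_variable(url):
            # Stand-in: a real reader would open the URL with netCDF4/pydap and
            # return the requested grid; here we fabricate a small array.
            rng = np.random.default_rng(abs(hash(url)) % 2**32)
            return rng.normal(288.0, 5.0, size=(90, 180))   # e.g. a temperature grid

        sc = SparkContext(appName="scispark-sketch")
        urls = ["model.nc?tas&time=%d" % t for t in range(120)]      # time-sorted URLs
        grids = sc.parallelize(urls, numSlices=12).map(load_variable).cache()  # in memory
        global_means = grids.map(np.mean).collect()   # one statistic per time step
        print(np.mean(global_means))
        sc.stop()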

  5. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache™ Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache™ Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This 2nd generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) Decrease the time to compute comparison statistics and plots from minutes to seconds; (2) Allow for interactive exploration of time-series properties over seasons and years; (3) Decrease the time for satellite data ingestion into RCMES to hours; (4) Allow for Level-2 comparisons with higher-order statistics or PDF's in minutes to hours; and (5) Move RCMES into a near real time decision-making platform. We will report on: the architecture and design of SciSpark, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDF …

  6. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
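
    One ingredient named in this record, space-filling-curve-based data layout, is easy to illustrate: a Morton (Z-order) key interleaves coordinate bits so that cells close in space tend to be close in the linear order used for I/O. A sketch, not the papers' actual encoding:

        def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
            """Interleave the bits of (x, y, z) into a single Z-order index."""
            key = 0
            for b in range(bits):
                key |= ((x >> b) & 1) << (3 * b)
                key |= ((y >> b) & 1) << (3 * b + 1)
                key |= ((z >> b) & 1) << (3 * b + 2)
            return key

        # Cells sorted by Morton key are written contiguously; neighbors in space
        # tend to be neighbors in the file, improving locality and compression.
        cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
        cells.sort(key=lambda c: morton3d(*c))
        print(cells[:8])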

  8. High-flux ionic diodes, ionic transistors and ionic amplifiers based on external ion concentration polarization by an ion exchange membrane: a new scalable ionic circuit platform.

    PubMed

    Sun, Gongchen; Senapati, Satyajyoti; Chang, Hsueh-Chia

    2016-04-07

    A microfluidic ion exchange membrane hybrid chip is fabricated using polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (>100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems.

  9. High-flux ionic diodes, ionic transistors and ionic amplifiers based on external ion concentration polarization by an ion exchange membrane: a new scalable ionic circuit platform

    PubMed Central

    Sun, Gongchen; Senapati, Satyajyoti

    2016-01-01

    A microfluidic-ion exchange membrane hybrid chip is fabricated by polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (> 100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems. PMID:26960551

  10. Scalable Work Stealing

    SciTech Connect

    Dinan, James S.; Larkins, D. B.; Sadayappan, Ponnuswamy; Krishnamoorthy, Sriram; Nieplocha, Jaroslaw

    2009-11-14

    Irregular and dynamic parallel applications pose significant challenges to achieving scalable performance on large-scale multicore clusters. These applications often require ongoing, dynamic load balancing in order to maintain efficiency. While effective at small scale, centralized load balancing schemes quickly become a bottleneck on large-scale clusters. Work stealing is a popular approach to distributed dynamic load balancing; however, its performance on large-scale clusters is not well understood. Prior work on work stealing has largely focused on shared memory machines. In this work, we investigate the design and scalability of work stealing on modern distributed memory systems. We demonstrate high efficiency and low overhead when scaling to 8,192 processors for three benchmark codes: a producer-consumer benchmark, the unbalanced tree search benchmark, and a multiresolution analysis kernel.
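
    The basic discipline, each worker popping tasks from the tail of its own deque and stealing from the head of a victim's deque when it runs dry, can be sketched in a single process; a real distributed implementation adds atomic queue operations and one-sided communication, which this toy omits:

        import random
        from collections import deque

        def run(workers: int, tasks: int) -> int:
            deques = [deque() for _ in range(workers)]
            for t in range(tasks):             # deliberately unbalanced start:
                deques[t % 2].append(t)        # only workers 0 and 1 get work
            done = 0
            while any(deques):
                for w in range(workers):
                    if deques[w]:
                        deques[w].pop()        # local LIFO pop from own tail
                        done += 1
                    else:
                        victims = [v for v in range(workers) if v != w and deques[v]]
                        if victims:            # steal FIFO from a victim's head
                            deques[w].append(deques[random.choice(victims)].popleft())
            return done

        print(run(workers=8, tasks=100))   # all 100 tasks complete despite imbalance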

  11. Chemical Vapor-Deposited Hexagonal Boron Nitride as a Scalable Template for High-Performance Organic Field-Effect Transistors

    DOE PAGES

    Lee, Tae Hoon; Kim, Kwanpyo; Kim, Gwangwoo; ...

    2017-02-27

    Organic field-effect transistors have attracted much attention because of their potential use in low-cost, large-area, flexible electronics. High-performance organic transistors require a low density of grain boundaries in their organic films and a decrease in the charge trap density at the semiconductor–dielectric interface for efficient charge transport. In this respect, the role of the dielectric material is crucial because it primarily determines the growth of the film and the interfacial trap density. Here, we demonstrate the use of chemical vapor-deposited hexagonal boron nitride (CVD h-BN) as a scalable growth template/dielectric for high-performance organic field-effect transistors. The field-effect transistors based on C60 films grown on single-layer CVD h-BN exhibit an average mobility of 1.7 cm² V⁻¹ s⁻¹ and a maximal mobility of 2.9 cm² V⁻¹ s⁻¹ with on/off ratios of 10⁷. The structural and morphology analysis shows that the epitaxial, two-dimensional growth of C60 on CVD h-BN is mainly responsible for the superior charge transport behavior. In conclusion, we believe that CVD h-BN can serve as a growth template for various organic semiconductors, allowing the development of large-area, high-performance flexible electronics.

  12. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    DOE PAGES

    Xu, Feng; Ge, Binghui; Chen, Jing; ...

    2016-03-30

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). However, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. Furthermore, we demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10⁴ s⁻¹. Our results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  13. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    SciTech Connect

    Xu, Feng; Ge, Binghui; Chen, Jing; Nathan, Arokia; Xin, Linhuo L.; Ma, Hongyu; Zhu, Chongyang; Xia, Weiwei; Li, Zhengrui; Li, Shengli; Yu, Kaihao; Wu, Lijun; Cui, Yiping; Sun, Litao; Zhu, Yimei

    2016-03-30

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). However, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. Furthermore, we demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10⁴ s⁻¹. Our results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  14. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Ge, Binghui; Chen, Jing; Nathan, Arokia; Xin, Linhuo L.; Ma, Hongyu; Min, Huihua; Zhu, Chongyang; Xia, Weiwei; Li, Zhengrui; Li, Shengli; Yu, Kaihao; Wu, Lijun; Cui, Yiping; Sun, Litao; Zhu, Yimei

    2016-06-01

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). However, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. We further demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ~1.25 × 10⁴ s⁻¹. The results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs.

  15. High Yield and Scalable Fabrication of Nano/Bio Hybrid Graphene Field Effect Transistors for Cancer Biomarker Detection

    NASA Astrophysics Data System (ADS)

    Ducos, Pedro; Diaz, Madeline; Robinson, Matthew; Johnson, A. T. Charlie

    2015-03-01

    Graphene field effect transistors (GFETs) hold tremendous promise for use as biosensor transduction elements due to graphene's high mobility, low noise and all-surface structure with every atom exposed to the environment. We developed a GFET array fabrication based on two approaches, pre-patterned transfer and post-transfer photolithography. Both approaches are scalable, high yield, and electrically stable. Functional groups for protein immobilization were added to the GFET using various bi-functional pyrene-based linkers. One approach immobilized an azide-engineered protein through "Staudinger reaction" chemistry, with NHS-phosphine reacting with a 1-aminopyrene linker. Another approach bound an engineered antibody via 1-pyrene butanoic acid succinimidyl ester, where an amine group of the antibody reacts with the succinimide of the linker. GFETs were studied by Raman spectroscopy, AFM and current-gate voltage (I-Vg) characterization at several steps of the fabrication process. A sensing response was obtained for a breast cancer biomarker (HER2) as a function of target concentration. We have started to design multiplexed sensor arrays by adding several functional groups to GFETs on a single chip. Simultaneous detection with these devices will be discussed.

  16. Highly Disordered Array of Silicon Nanowires: an Effective and Scalable Approach for Performing and Flexible Electrochemical Biosensors.

    PubMed

    Maiolo, Luca; Polese, Davide; Pecora, Alessandro; Fortunato, Guglielmo; Shacham-Diamand, Yosi; Convertino, Annalisa

    2016-03-09

    The direct integration of disorderly arranged and randomly oriented silicon nanowires (SiNWs) into ultraflexible and transferable electronic circuits for electrochemical biosensing applications is proposed. The working electrode (WE) of a three-electrode impedance device, fabricated on a polyimide (PI) film, is modified with SiNWs covered by a thin Au layer and functionalized to bind the sensing element. The biosensing behavior is investigated through the ligand-receptor binding of the biotin-avidin system. Impedance measurements show very efficient detection of avidin over a broad range of concentrations, from hundreds of micromolar down to picomolar values. The impedance response is modeled through a simple equivalent circuit, which takes into account the unique WE morphology and its modification with successive layers of biomolecules. This approach of exploiting a highly disordered SiNW ensemble in biosensing proves to be very promising for three main reasons: first, the system morphology allows high sensing performance; second, these nanostructures can be built via a scalable and transferable fabrication methodology allowing easy integration on non-conventional substrates; third, reliable modeling of the sensing response can be developed by considering the morphological and surface characteristics over an ensemble of disordered NWs rather than over individual NWs.

  17. A new class of doped nanobulk high-figure-of-merit thermoelectrics by scalable bottom-up assembly.

    PubMed

    Mehta, Rutvik J; Zhang, Yanliang; Karthik, Chinnathambi; Singh, Binay; Siegel, Richard W; Borca-Tasciuc, Theodorian; Ramanath, Ganpati

    2012-01-10

    Obtaining thermoelectric materials with high figure of merit ZT is an exacting challenge because it requires the independent control of electrical conductivity, thermal conductivity and Seebeck coefficient, which are often unfavourably coupled. Recent works have devised strategies based on nanostructuring and alloying to address this challenge in thin films, and to obtain bulk p-type alloys with ZT>1. Here, we demonstrate a new class of both p- and n-type bulk nanomaterials with room-temperature ZT as high as 1.1 using a combination of sub-atomic-per-cent doping and nanostructuring. Our nanomaterials were fabricated by bottom-up assembly of sulphur-doped pnictogen chalcogenide nanoplates sculpted by a scalable microwave-stimulated wet-chemical method. Bulk nanomaterials from single-component assemblies or nanoplate mixtures of different materials exhibit 25-250% higher ZT than their non-nanostructured bulk counterparts and state-of-the-art alloys. Adapting our synthesis and assembly approach should enable nanobulk thermoelectrics with further increases in ZT for transforming thermoelectric refrigeration and power harvesting technologies.

  18. A peripheral component interconnect express-based scalable and highly integrated pulsed spectrometer for solution state dynamic nuclear polarization

    SciTech Connect

    He, Yugui; Liu, Chaoyang; Feng, Jiwen; Wang, Dong; Chen, Fang; Liu, Maili; Zhang, Zhi; Wang, Chao

    2015-08-15

    High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately −170 for ¹H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo ¹H MRI at 0.35 T.

  19. A peripheral component interconnect express-based scalable and highly integrated pulsed spectrometer for solution state dynamic nuclear polarization

    NASA Astrophysics Data System (ADS)

    He, Yugui; Feng, Jiwen; Zhang, Zhi; Wang, Chao; Wang, Dong; Chen, Fang; Liu, Maili; Liu, Chaoyang

    2015-08-01

    High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately −170 for ¹H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo ¹H MRI at 0.35 T.

  20. Scalable fabrication of high purity diamond nanocrystals with long-spin-coherence nitrogen vacancy centers.

    PubMed

    Trusheim, Matthew E; Li, Luozhou; Laraoui, Abdelghani; Chen, Edward H; Bakhru, Hassaram; Schröder, Tim; Gaathon, Ophir; Meriles, Carlos A; Englund, Dirk

    2014-01-08

    The combination of long spin coherence time and nanoscale size has made nitrogen vacancy (NV) centers in nanodiamonds the subject of much interest for quantum information and sensing applications. However, currently available high-pressure high-temperature (HPHT) nanodiamonds have a high concentration of paramagnetic impurities that limit their spin coherence time to the order of microseconds, less than 1% of that observed in bulk diamond. In this work, we use a porous metal mask and a reactive ion etching process to fabricate nanocrystals from high-purity chemical vapor deposition (CVD) diamond. We show that NV centers in these CVD nanodiamonds exhibit record-long spin coherence times in excess of 200 μs, enabling magnetic field sensitivities of 290 nT Hz^(-1/2) with the spatial resolution characteristic of a 50 nm diameter probe.

  1. Scalable gene synthesis by selective amplification of DNA pools from high-fidelity microchips.

    PubMed

    Kosuri, Sriram; Eroshenko, Nikolai; Leproust, Emily M; Super, Michael; Way, Jeffrey; Li, Jin Billy; Church, George M

    2010-12-01

    Development of cheap, high-throughput and reliable gene synthesis methods will broadly stimulate progress in biology and biotechnology. Currently, the reliance on column-synthesized oligonucleotides as a source of DNA limits further cost reductions in gene synthesis. Oligonucleotides from DNA microchips can reduce costs by at least an order of magnitude, yet efforts to scale their use have been largely unsuccessful owing to the high error rates and complexity of the oligonucleotide mixtures. Here we use high-fidelity DNA microchips, selective oligonucleotide pool amplification, optimized gene assembly protocols and enzymatic error correction to develop a method for highly parallel gene synthesis. We tested our approach by assembling 47 genes, including 42 challenging therapeutic antibody sequences, encoding a total of ∼35 kilobase pairs of DNA. These assemblies were performed from a complex background containing 13,000 oligonucleotides encoding ∼2.5 megabases of DNA, which is at least 50 times larger than in previously published attempts.

  2. High-Speed Scalable Silicon-MoS2 P-N Heterojunction Photodetectors

    PubMed Central

    Dhyani, Veerendra; Das, Samaresh

    2017-01-01

    Two-dimensional molybdenum disulfide (MoS2) is a promising material for ultrasensitive photodetectors owing to its favourable band gap and high absorption coefficient. However, its commercial applications are limited by the lack of a high quality p-n junction and a large wafer-scale fabrication process. A high speed Si/MoS2 p-n heterojunction photodetector made with a simple and CMOS-compatible approach is reported here. The large area MoS2 thin film on a silicon platform has been synthesized by sulfurization of RF-sputtered MoO3 films. The fabricated molecular layers of MoS2 on silicon offer high responsivity up to 8.75 A/W (at 580 nm and 3 V bias) with an ultra-fast response of 10 μs (rise time). Transient measurements of the Si/MoS2 heterojunction under modulated light reveal that the devices can function up to 50 kHz. The Si/MoS2 heterojunction is found to be sensitive to broadband wavelengths ranging from visible to near-infrared light with maximum detectivity up to ≈1.4 × 10¹² Jones (2 V bias). Reproducible low dark current and high responsivity have been measured from over 20 devices on the same wafer. Additionally, the MoS2/Si photodetectors exhibit excellent stability in ambient atmosphere. PMID:28281652

  3. High-Speed Scalable Silicon-MoS2 P-N Heterojunction Photodetectors

    NASA Astrophysics Data System (ADS)

    Dhyani, Veerendra; Das, Samaresh

    2017-03-01

    Two-dimensional molybdenum disulfide (MoS2) is a promising material for ultrasensitive photodetectors owing to its favourable band gap and high absorption coefficient. However, its commercial applications are limited by the lack of a high quality p-n junction and a large wafer-scale fabrication process. A high speed Si/MoS2 p-n heterojunction photodetector made with a simple and CMOS-compatible approach is reported here. The large area MoS2 thin film on a silicon platform has been synthesized by sulfurization of RF-sputtered MoO3 films. The fabricated molecular layers of MoS2 on silicon offer high responsivity up to 8.75 A/W (at 580 nm and 3 V bias) with an ultra-fast response of 10 μs (rise time). Transient measurements of the Si/MoS2 heterojunction under modulated light reveal that the devices can function up to 50 kHz. The Si/MoS2 heterojunction is found to be sensitive to broadband wavelengths ranging from visible to near-infrared light with maximum detectivity up to ≈1.4 × 10¹² Jones (2 V bias). Reproducible low dark current and high responsivity have been measured from over 20 devices on the same wafer. Additionally, the MoS2/Si photodetectors exhibit excellent stability in ambient atmosphere.

  4. Multicatalytic colloids with highly scalable, adjustable, and stable functionalities in organic and aqueous media

    NASA Astrophysics Data System (ADS)

    Kim, Donghee; Cheong, Sanghyuk; Ahn, Yun Gyong; Ryu, Sook Won; Kim, Jai-Kyeong; Cho, Jinhan

    2016-03-01

    Despite a large number of developments of noble metal (or metal oxide) NP-based catalysts, it has been a great challenge to prepare high-performance recyclable catalysts with integrated functionalities that can be used in various solvent media. Here, we report on layer-by-layer (LbL) assembled multicatalysts with high catalytic performance, showing high dispersion and recycling stability in organic and aqueous media. The remarkable advantages of our approach are as follows. (i) Various metal or metal oxide NPs with desired catalytic performance can be easily incorporated into multilayered shells, forming densely packed arrays that allow one colloid to be used as a multicatalyst with highly integrated and controllable catalytic properties. (ii) Additionally, the dispersion stability of catalytic colloids in a desired solvent can be determined by the type of ultrathin outermost layer coating each colloid. (iii) Lastly, the covalent bonding between inorganic NPs and dendrimers within multilayer shells enhances the recycling stability of multicatalytic colloids. The resulting core-shell colloids including OA-Fe3O4 NPs, TOABr-Pd NPs, and OA-TiO2 NPs exhibited excellent performance in the oxidation of 3,3',5,5'-tetramethylbenzidine (TMB) and photocatalysis in aqueous media and in the Sonogashira coupling reaction (99% yield) in organic media. Given that the catalytic properties of recyclable colloids reported to date have entirely depended on the functionality of a single catalytic NP layer deposited onto colloids in selective solvent media, our approach provides a basis for the design and exploitation of high-performance recyclable colloids with integrated multicatalytic properties and high dispersion stability in a variety of solvents.

  5. Scalable preparation and characterization of GaN nanopowders with high crystallinity by soluble salts-assisted route

    NASA Astrophysics Data System (ADS)

    Lv, Yingying; Yu, Leshu; Ai, Wenwen; Li, Chungen

    2014-11-01

    By using Na3PO4 as a dispersant, the soluble salt-assisted route has been further developed to prepare highly crystalline GaN nanopowders on a large scale through the direct nitridation of a Ga-Na3PO4 mixture at 750-950 °C, followed by washing with water. Systematic characterization including XRD, Raman, IR, TEM, XPS, and PL spectroscopy showed that the as-prepared nanopowders were composed of pure, hexagonal-phase GaN nanoparticles with diameters of 8-18 nm and exhibited a broad UV emission centered at 388 nm and a blue emission band centered at around 547 nm. Because of the utilization of the simple reaction between metallic Ga and NH3, the preparation of pure GaN nanopowders becomes very easy, economical, and scalable, suggesting broad application as an optoelectronic device material. These results indicate the wide applicability of the soluble salt-assisted route for promising industrial production of GaN nanopowders.

  6. Scalable preparation of porous micron-SnO2/C composites as high performance anode material for lithium ion battery

    NASA Astrophysics Data System (ADS)

    Wang, Ming-Shan; Lei, Ming; Wang, Zhi-Qiang; Zhao, Xing; Xu, Jun; Yang, Wei; Huang, Yun; Li, Xing

    2016-03-01

    Nano tin dioxide-carbon (SnO2/C) composites prepared with various carbon materials, such as carbon nanotubes, porous carbon, and graphene, have attracted extensive attention in wide fields. However, undesirable characteristics of the nanoparticles, including high surface area, low tap density, and self-agglomeration, have greatly restricted their large-scale practical applications. In this study, novel porous micron-SnO2/C (p-SnO2/C) composites are prepared at scale by a simple hydrothermal approach using glucose as a carbon source and Pluronic F127 as a pore-forming agent/soft template. The SnO2 nanoparticles were homogeneously dispersed in micron carbon spheres by assembly with F127/glucose. The continuous three-dimensional porous carbon networks effectively provide strain relaxation for SnO2 volume expansion/shrinkage during lithium insertion/extraction. In addition, the carbon matrix could largely minimize the direct exposure of SnO2 to the electrolyte, thus ensuring the formation of stable solid electrolyte interface films. Moreover, the porous structure could also create efficient channels for the fast transport of lithium ions. As a consequence, the p-SnO2/C composites exhibit stable cycle performance, such as a high capacity retention of over 96% for 100 cycles at a current density of 200 mA g⁻¹ and a long cycle life of up to 800 cycles at a higher current density of 1000 mA g⁻¹.

  7. Scalable and Facile Preparation of Highly Stretchable Electrospun PEDOT:PSS@PU Fibrous Nonwovens toward Wearable Conductive Textile Applications.

    PubMed

    Ding, Yichun; Xu, Wenhui; Wang, Wenyu; Fong, Hao; Zhu, Zhengtao

    2017-09-06

    Flexible and stretchable conductive textiles are highly desired for potential applications in wearable electronics. This study demonstrates a scalable and facile preparation of an all-organic nonwoven that is mechanically stretchable and electrically conductive. A polyurethane (PU) fibrous nonwoven is prepared via the electrospinning technique; in the following step, the electrospun PU nonwoven is dip-coated with the conducting polymer poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). This simple method enables convenient preparation of PEDOT:PSS@PU nonwovens with initial sheet resistance in the range of 35-240 Ω/sq (i.e., electrical conductivity in the range of 30-200 S m⁻¹) by varying the number of dip-coating cycles. The resistance change of the PEDOT:PSS@PU nonwoven under stretch is investigated. The PEDOT:PSS@PU nonwoven is first stretched and then released repeatedly under a certain strain (denoted the prestretching strain); the resistance of the PEDOT:PSS@PU nonwoven becomes constant after an irreversible change over the first 10 stretch-release cycles. Thereafter, the resistance of the nonwoven does not vary appreciably under stretch as long as the strain is within the prestretching strain. Therefore, the PEDOT:PSS@PU nonwoven can be used as a stretchable conductor within the prestretching strain. Circuits using sheets and twisted yarns of the nonwovens as electric conductors are demonstrated.
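
    The quoted sheet-resistance and conductivity ranges are mutually consistent through sigma = 1/(R_s * t) for a conducting layer of thickness t; the short check below back-calculates an implied thickness, which is an assumption for illustration rather than a measured value:

        def conductivity(sheet_resistance_ohm_per_sq: float, thickness_m: float) -> float:
            # sigma = 1 / (R_s * t) for a uniform conducting layer
            return 1.0 / (sheet_resistance_ohm_per_sq * thickness_m)

        t = 1.0 / (35 * 200)          # thickness implied by 35 ohm/sq at 200 S/m
        print(f"implied conducting-layer thickness: {t * 1e3:.2f} mm")   # ~0.14 mm
        print(f"{conductivity(240, t):.0f} S/m at 240 ohm/sq")           # ~29 S/m, the low end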

  8. A Scalable Gene Synthesis Platform Using High-Fidelity DNA Microchips

    PubMed Central

    Kosuri, Sriram; Eroshenko, Nikolai; LeProust, Emily; Super, Michael; Way, Jeffrey; Li, Jin Billy; Church, George M.

    2010-01-01

    Development of cheap, high-throughput, and reliable gene synthesis methods will broadly stimulate progress in biology and biotechnology. Currently, the reliance on column-synthesized oligonucleotides as a source of DNA limits further cost reductions in gene synthesis. Oligonucleotides from DNA microchips can reduce costs by at least an order of magnitude, yet efforts to scale their use have been largely unsuccessful due to the high error rates and complexity of the oligonucleotide mixtures. Here we use high-fidelity DNA microchips, selective oligonucleotide pool amplification, optimized gene assembly protocols, and enzymatic error correction to develop a highly parallel gene synthesis platform. We tested our platform by assembling 47 genes, including 42 challenging therapeutic antibody sequences, encoding a total of ~35 kilo-basepairs of DNA. These assemblies were performed from a complex background containing 13,000 oligonucleotides encoding ~2.5 megabases of DNA, which is at least 50 times larger than previously published attempts. PMID:21113165

  9. Scalable fabrication of high-performance and flexible graphene strain sensors

    NASA Astrophysics Data System (ADS)

    Tian, He; Shu, Yi; Cui, Ya-Long; Mi, Wen-Tian; Yang, Yi; Xie, Dan; Ren, Tian-Ling

    2013-12-01

    Graphene strain sensors have promising prospects of applications in detecting human motion. However, the shortage of graphene growth and patterning techniques has become a challenging issue hindering the application of graphene strain sensors. Therefore, we propose wafer-scale flexible strain sensors with high performance, which can be fabricated in one-step laser scribing. The graphene films could be obtained by directly reducing graphene oxide film in a Light-Scribe DVD burner. The gauge factor (GF) of the graphene strain sensor (10 mm × 10 mm square) is 0.11. In order to enhance the GF further, graphene micro-ribbons (20 μm width, 0.6 mm long) have been used as strain sensors, of which the GF is up to 9.49. The devices may conform to various application requirements, such as high GF for low-strain applications and low GF for high deformation applications. The work indicates that laser scribed flexible graphene strain sensors could be widely used in medical-sensing, bio-sensing, artificial skin and many other areas.

  10. High-performance hollow sulfur nanostructured battery cathode through a scalable, room temperature, one-step, bottom-up approach

    PubMed Central

    Li, Weiyang; Zheng, Guangyuan; Yang, Yuan; Seh, Zhi Wei; Liu, Nian; Cui, Yi

    2013-01-01

    Sulfur is an exciting cathode material with high specific capacity of 1,673 mAh/g, more than five times the theoretical limits of its transition metal oxides counterpart. However, successful applications of sulfur cathode have been impeded by rapid capacity fading caused by multiple mechanisms, including large volume expansion during lithiation, dissolution of intermediate polysulfides, and low ionic/electronic conductivity. Tackling the sulfur cathode problems requires a multifaceted approach, which can simultaneously address the challenges mentioned above. Herein, we present a scalable, room temperature, one-step, bottom-up approach to fabricate monodisperse polymer (polyvinylpyrrolidone)-encapsulated hollow sulfur nanospheres for sulfur cathode, allowing unprecedented control over electrode design from nanoscale to macroscale. We demonstrate high specific discharge capacities at different current rates (1,179, 1,018, and 990 mAh/g at C/10, C/5, and C/2, respectively) and excellent capacity retention of 77.6% (at C/5) and 73.4% (at C/2) after 300 and 500 cycles, respectively. Over a long-term cycling of 1,000 cycles at C/2, a capacity decay as low as 0.046% per cycle and an average coulombic efficiency of 98.5% was achieved. In addition, a simple modification on the sulfur nanosphere surface with a layer of conducting polymer, poly(3,4-ethylenedioxythiophene), allows the sulfur cathode to achieve excellent high-rate capability, showing a high reversible capacity of 849 and 610 mAh/g at 2C and 4C, respectively. PMID:23589875

  11. Scalable synthesis of Fe₃O₄ nanoparticles anchored on graphene as a high-performance anode for lithium ion batteries

    SciTech Connect

    Dong, Yu Cheng; Ma, Ru Guang; Jun Hu, Ming; Cheng, Hua; Tsang, Chun Kwan; Yang, Qing Dan; Yang Li, Yang; Zapien, Juan Antonio

    2013-05-01

    We report a scalable strategy to synthesize Fe₃O₄/graphene nanocomposites as a high-performance anode material for lithium ion batteries. In this study, ferric citrate is used as the precursor to prepare Fe₃O₄ nanoparticles without introducing an additional reducing agent; furthermore, we show that such Fe₃O₄ nanoparticles can be anchored on graphene sheets, which is attributed to the multifunctional effect of the citrate groups: the reduction of Fe³⁺ to Fe²⁺ and the deposition of Fe₃O₄ on graphene occur simultaneously, with citrate acting as both reductant and anchoring agent. Electrochemical characterization of the Fe₃O₄/graphene nanocomposites exhibits a large reversible capacity (~1347 mA h g⁻¹ at a current density of 0.2 C up to 100 cycles, and a subsequent capacity of ~619 mA h g⁻¹ at a current density of 2 C up to 200 cycles), as well as high coulombic efficiency (~97%), excellent rate capability, and good cyclic stability. High resolution transmission electron microscopy confirms that Fe₃O₄ nanoparticles with a size of ~4–16 nm are densely anchored on thin graphene sheets, resulting in large synergetic effects between the Fe₃O₄ nanoparticles and graphene sheets and hence high electrochemical performance.

  12. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega

    PubMed Central

    Sievers, Fabian; Wilm, Andreas; Dineen, David; Gibson, Toby J; Karplus, Kevin; Li, Weizhong; Lopez, Rodrigo; McWilliam, Hamish; Remmert, Michael; Söding, Johannes; Thompson, Julie D; Higgins, Desmond G

    2011-01-01

    Multiple sequence alignments are fundamental to many sequence analysis methods. Most alignments are computed using the progressive alignment heuristic. These methods are starting to become a bottleneck in some analysis pipelines when faced with data sets of the size of many thousands of sequences. Some methods allow computation of larger data sets while sacrificing quality, and others produce high-quality alignments, but scale badly with the number of sequences. In this paper, we describe a new program called Clustal Omega, which can align virtually any number of protein sequences quickly and that delivers accurate alignments. The accuracy of the package on smaller test cases is similar to that of the high-quality aligners. On larger data sets, Clustal Omega outperforms other packages in terms of execution time and quality. Clustal Omega also has powerful features for adding sequences to and exploiting information in existing alignments, making use of the vast amount of precomputed information in public databases like Pfam. PMID:21988835

  13. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega.

    PubMed

    Sievers, Fabian; Wilm, Andreas; Dineen, David; Gibson, Toby J; Karplus, Kevin; Li, Weizhong; Lopez, Rodrigo; McWilliam, Hamish; Remmert, Michael; Söding, Johannes; Thompson, Julie D; Higgins, Desmond G

    2011-10-11

    Multiple sequence alignments are fundamental to many sequence analysis methods. Most alignments are computed using the progressive alignment heuristic. These methods are starting to become a bottleneck in some analysis pipelines when faced with data sets of the size of many thousands of sequences. Some methods allow computation of larger data sets while sacrificing quality, and others produce high-quality alignments, but scale badly with the number of sequences. In this paper, we describe a new program called Clustal Omega, which can align virtually any number of protein sequences quickly and that delivers accurate alignments. The accuracy of the package on smaller test cases is similar to that of the high-quality aligners. On larger data sets, Clustal Omega outperforms other packages in terms of execution time and quality. Clustal Omega also has powerful features for adding sequences to and exploiting information in existing alignments, making use of the vast amount of precomputed information in public databases like Pfam.
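
    For readers who want to try it, a typical invocation of the clustalo command-line tool from Python looks like the following; the file paths are placeholders, and the flags shown are standard clustalo options:

        import subprocess

        def align(in_fasta: str, out_fasta: str, threads: int = 4) -> None:
            # --force overwrites an existing output file; --auto lets clustalo
            # pick reasonable speed/accuracy settings for the input size.
            subprocess.run(["clustalo", "-i", in_fasta, "-o", out_fasta,
                            "--outfmt=fasta", f"--threads={threads}",
                            "--auto", "--force"],
                           check=True)

        align("unaligned.fasta", "aligned.fasta")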

  14. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

    SciTech Connect

    Langston, Michael A

    2012-09-06

    The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.

  15. Large enhancement of quantum dot fluorescence by highly scalable nanoporous gold.

    PubMed

    Zhang, Ling; Song, Yunke; Fujita, Takeshi; Zhang, Ye; Chen, Mingwei; Wang, Tza-Huei

    2014-02-26

    Dealloyed nanoporous gold (NPG) dramatically enhances quantum dot (QD) fluorescence by amplifying near-field excitation and increasing the radiative decay rate. Originating from plasmonic coupling, the fluorescence enhancement is highly dependent upon the nanopore size of the NPG. In contrast to other nanoengineered metallic structures, NPG exhibits fluorescence enhancement of QDs over a large substrate surface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Scalable High-order Methods for Multi-Scale Problems: Analysis, Algorithms and Application

    DTIC Science & Technology

    2016-02-26

    Keywords: simulation, domain decomposition, CFD, gappy data, estimation theory, gap-tooth algorithm. When the solution is smooth in time, temporal estimation based on previously saved data can give a highly accurate result on a missing part of the solution.

  17. Lightweight, Flexible, High-Performance Carbon Nanotube Cables Made by Scalable Flow Coating.

    PubMed

    Mirri, Francesca; Orloff, Nathan D; Forster, Aaron M; Ashkar, Rana; Headrick, Robert J; Bengio, E Amram; Long, Christian J; Choi, April; Luo, Yimin; Walker, Angela R Hight; Butler, Paul; Migler, Kalman B; Pasquali, Matteo

    2016-02-01

    Coaxial cables for data transmission are ubiquitous in telecommunications, aerospace, automotive, and robotics industries. Yet, the metals used to make commercial cables are unsuitably heavy and stiff. These undesirable traits are particularly problematic in aerospace applications, where weight is at a premium and flexibility is necessary to conform with the distributed layout of electronic components in satellites and aircraft. The cable outer conductor (OC) is usually the heaviest component of modern data cables; therefore, exchanging the conventional metallic OC for lower weight materials with comparable transmission characteristics is highly desirable. Carbon nanotubes (CNTs) have recently been proposed to replace the metal components in coaxial cables; however, signal attenuation was too high in prototypes produced so far. Here, we fabricate the OC of coaxial data cables by directly coating a solution of CNTs in chlorosulfonic acid (CSA) onto the cable inner dielectric. This coating has an electrical conductivity that is approximately 2 orders of magnitude greater than the best CNT OC reported in the literature to date. This high conductivity makes CNT coaxial cables an attractive alternative to commercial cables with a metal (tin-coated copper) OC, providing comparable cable attenuation and mechanical durability with a 97% lower component mass.
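    To see why OC conductivity drives cable attenuation, the sketch below evaluates the standard textbook skin-effect conductor-loss formula for a coaxial line, alpha_c = R_s (1/a + 1/b) / (2 eta ln(b/a)) with R_s = sqrt(pi f mu0 / sigma). The geometry, frequency, and the simplification that both conductors share one conductivity are illustrative assumptions, not values from the paper.

```python
import math

MU0, EPS0 = 4e-7 * math.pi, 8.854e-12

def coax_conductor_loss_db_per_m(f_hz, a, b, sigma, eps_r=2.1):
    """Skin-effect conductor loss of a coaxial line; a, b are inner/outer
    conductor radii (m), sigma the conductivity (S/m), eps_r the dielectric."""
    rs = math.sqrt(math.pi * f_hz * MU0 / sigma)     # surface resistance
    eta = math.sqrt(MU0 / (EPS0 * eps_r))            # wave impedance of dielectric
    alpha_np = rs * (1 / a + 1 / b) / (2 * eta * math.log(b / a))
    return 8.686 * alpha_np                          # Np/m -> dB/m

# Illustrative only: copper-like OC vs. a conductor ~100x less conductive.
for sigma in (5.8e7, 5.8e5):
    print(f"{sigma:.1e} S/m -> "
          f"{coax_conductor_loss_db_per_m(1e9, 0.45e-3, 1.5e-3, sigma):.2f} dB/m")
```

    Since the loss scales as 1/sqrt(sigma), two orders of magnitude in conductivity buy roughly a factor of ten in attenuation, which is why the reported conductivity improvement translates directly into comparable cable attenuation.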

  18. Scalable synthesis of silicon-nanolayer-embedded graphite for high-energy lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Ko, Minseong; Chae, Sujong; Ma, Jiyoung; Kim, Namhyung; Lee, Hyun-Wook; Cui, Yi; Cho, Jaephil

    2016-09-01

    Existing anode technologies are approaching their limits, and silicon is recognized as a potential alternative due to its high specific capacity and abundance. However, to date the commercial use of silicon has not satisfied electrode calendering with limited binder content comparable to commercial graphite anodes for high energy density. Here we demonstrate the feasibility of a next-generation hybrid anode using silicon-nanolayer-embedded graphite/carbon. This architecture allows compatibility between silicon and natural graphite and addresses the issues of severe side reactions caused by structural failure of crumbled graphite dust and uncombined residue of silicon particles by conventional mechanical milling. This structure shows a high first-cycle Coulombic efficiency (92%) and a rapid increase of the Coulombic efficiency to 99.5% after only 6 cycles with a capacity retention of 96% after 100 cycles, with an industrial electrode density of >1.6 g cm^-3, areal capacity loading of >3.3 mAh cm^-2, and <4 wt% binding materials in a slurry. As a result, a full cell using LiCoO2 has demonstrated a higher energy density (1,043 Wh l^-1) than with standard commercial graphite electrodes.

  19. Homogenous 96-Plex PEA Immunoassay Exhibiting High Sensitivity, Specificity, and Excellent Scalability

    PubMed Central

    Holmquist, Göran; Björkesten, Johan; Bucht Thorsen, Stine; Ekman, Daniel; Eriksson, Anna; Rennel Dickens, Emma; Ohlsson, Sandra; Edfeldt, Gabriella; Andersson, Ann-Catrin; Lindstedt, Patrik; Stenvang, Jan; Gullberg, Mats; Fredriksson, Simon

    2014-01-01

    Medical research is developing an ever greater need for comprehensive high-quality data generation to realize the promises of personalized health care based on molecular biomarkers. The nucleic acid proximity-based methods proximity ligation and proximity extension assays have, with their dual reporters, shown potential to relieve the shortcomings of antibodies and their inherent cross-reactivity in multiplex protein quantification applications. The aim of the present study was to develop a robust 96-plex immunoassay based on the proximity extension assay (PEA) for improved high throughput detection of protein biomarkers. This was enabled by: (1) a modified design leading to a reduced number of pipetting steps compared to the existing PEA protocol, as well as improved intra-assay precision; (2) a new enzymatic system that uses a hyper-thermostable enzyme, Pwo, for uniting the two probes, allowing for room temperature addition of all reagents and improved sensitivity; (3) introduction of an inter-plate control and a new normalization procedure leading to improved inter-assay precision (reproducibility). The multiplex proximity extension assay was found to perform well in complex samples, such as serum and plasma, and also in xenografted mice and resuspended dried blood spots, consuming only 1 µL sample per test. All in all, the development of the current multiplex technique is a step toward robust high throughput protein marker discovery and research. PMID:24755770

  20. Generation of Scalable, Metallic High-Aspect Ratio Nanocomposites in a Biological Liquid Medium.

    PubMed

    Cotton Kelly, Kinsey; Wasserman, Jessica R; Deodhar, Sneha; Huckaby, Justin; DeCoster, Mark A

    2015-07-08

    The goal of this protocol is to describe the synthesis of two novel biocomposites with high-aspect ratio structures. The biocomposites consist of copper and cystine, with either copper nanoparticles (CNPs) or copper sulfate contributing the metallic component. Synthesis is carried out in liquid under biological conditions (37 °C) and the self-assembled composites form after 24 hr. Once formed, these composites are highly stable in both liquid media and in a dried form. The composites scale from the nano- to micro- range in length, and from a few microns to 25 nm in diameter. Field emission scanning electron microscopy with energy dispersive X-ray spectroscopy (EDX) demonstrated that sulfur was present in the NP-derived linear structures, while it was absent from the starting CNP material, thus confirming cystine as the source of sulfur in the final nanocomposites. During synthesis of these linear nano- and micro-composites, a diverse range of lengths of structures is formed in the synthesis vessel. Sonication of the liquid mixture after synthesis was demonstrated to assist in controlling average size of the structures by diminishing the average length with increased time of sonication. Since the formed structures are highly stable, do not agglomerate, and are formed in liquid phase, centrifugation may also be used to assist in concentrating and segregating formed composites.

  1. Lightweight, flexible, high-performance carbon nanotube cables made by scalable flow coating

    DOE PAGES

    Mirri, Francesca; Orloff, Nathan D.; Forster, Aaron M.; ...

    2016-01-21

    Coaxial cables for data transmission are ubiquitous in telecommunications, aerospace, automotive, and robotics industries. Yet, the metals used to make commercial cables are unsuitably heavy and stiff. These undesirable traits are particularly problematic in aerospace applications, where weight is at a premium and flexibility is necessary to conform with the distributed layout of electronic components in satellites and aircraft. The cable outer conductor (OC) is usually the heaviest component of modern data cables; therefore, exchanging the conventional metallic OC for lower weight materials with comparable transmission characteristics is highly desirable. Carbon nanotubes (CNTs) have recently been proposed to replace the metal components in coaxial cables; however, signal attenuation was too high in prototypes produced so far. Here, we fabricate the OC of coaxial data cables by directly coating a solution of CNTs in chlorosulfonic acid (CSA) onto the cable inner dielectric. This coating has an electrical conductivity that is approximately 2 orders of magnitude greater than the best CNT OC reported in the literature to date. In conclusion, this high conductivity makes CNT coaxial cables an attractive alternative to commercial cables with a metal (tin-coated copper) OC, providing comparable cable attenuation and mechanical durability with a 97% lower component mass.

  2. Lightweight, flexible, high-performance carbon nanotube cables made by scalable flow coating

    SciTech Connect

    Mirri, Francesca; Orloff, Nathan D.; Forster, Aaron M.; Ashkar, Rana; Headrick, Robert J.; Bengio, E. Amram; Long, Christian J.; Choi, April; Luo, Yimin; Hight Walker, Angela R.; Butler, Paul; Migler, Kalman B.; Pasquali, Matteo

    2016-01-21

    Coaxial cables for data transmission are ubiquitous in telecommunications, aerospace, automotive, and robotics industries. Yet, the metals used to make commercial cables are unsuitably heavy and stiff. These undesirable traits are particularly problematic in aerospace applications, where weight is at a premium and flexibility is necessary to conform with the distributed layout of electronic components in satellites and aircraft. The cable outer conductor (OC) is usually the heaviest component of modern data cables; therefore, exchanging the conventional metallic OC for lower weight materials with comparable transmission characteristics is highly desirable. Carbon nanotubes (CNTs) have recently been proposed to replace the metal components in coaxial cables; however, signal attenuation was too high in prototypes produced so far. Here, we fabricate the OC of coaxial data cables by directly coating a solution of CNTs in chlorosulfonic acid (CSA) onto the cable inner dielectric. This coating has an electrical conductivity that is approximately 2 orders of magnitude greater than the best CNT OC reported in the literature to date. In conclusion, this high conductivity makes CNT coaxial cables an attractive alternative to commercial cables with a metal (tin-coated copper) OC, providing comparable cable attenuation and mechanical durability with a 97% lower component mass.

  3. Complexity in scalable computing.

    SciTech Connect

    Rouson, Damian W. I.

    2008-12-01

    The rich history of scalable computing research owes much to a rapid rise in computing platform scale in terms of size and speed. As platforms evolve, so must algorithms and the software expressions of those algorithms. Unbridled growth in scale inevitably leads to complexity. This special issue grapples with two facets of this complexity: scalable execution and scalable development. The former results from efficient programming of novel hardware with increasing numbers of processing units (e.g., cores, processors, threads or processes). The latter results from efficient development of robust, flexible software with increasing numbers of programming units (e.g., procedures, classes, components or developers). The progression in the above two parenthetical lists goes from the lowest levels of abstraction (hardware) to the highest (people). This issue's theme encompasses this entire spectrum. The lead author of each article resides in the Scalable Computing Research and Development Department at Sandia National Laboratories in Livermore, CA. Their co-authors hail from other parts of Sandia, other national laboratories and academia. Their research sponsors include several programs within the Department of Energy's Office of Advanced Scientific Computing Research and its National Nuclear Security Administration, along with Sandia's Laboratory Directed Research and Development program and the Office of Naval Research. The breadth of interests of these authors and their customers reflects in the breadth of applications this issue covers. This article demonstrates how to obtain scalable execution on the increasingly dominant high-performance computing platform: a Linux cluster with multicore chips. The authors describe how deep memory hierarchies necessitate reducing communication overhead by using threads to exploit shared register and cache memory. On a matrix-matrix multiplication problem, they achieve up to 96% parallel efficiency with a three-part strategy: intra

  4. Organic Radical-Assisted Electrochemical Exfoliation for the Scalable Production of High-Quality Graphene.

    PubMed

    Yang, Sheng; Brüller, Sebastian; Wu, Zhong-Shuai; Liu, Zhaoyang; Parvez, Khaled; Dong, Renhao; Richard, Fanny; Samorì, Paolo; Feng, Xinliang; Müllen, Klaus

    2015-11-04

    Despite the intensive research efforts devoted to graphene fabrication over the past decade, the production of high-quality graphene on a large scale, at an affordable cost, and in a reproducible manner still represents a great challenge. Here, we report a novel method based on the controlled electrochemical exfoliation of graphite in aqueous ammonium sulfate electrolyte to produce graphene in large quantities and with outstanding quality. Because the radicals (e.g., HO(•)) generated from water electrolysis are responsible for defect formation on graphene during electrochemical exfoliation, a series of reducing agents as additives (e.g., (2,2,6,6-tetramethylpiperidin-1-yl)oxyl (TEMPO), ascorbic acid, and sodium borohydride) have been investigated to eliminate these radicals and thus control the exfoliation process. Remarkably, TEMPO-assisted exfoliation results in large graphene sheets (5-10 μm on average), which exhibit outstanding hole mobilities (∼405 cm(2) V(-1) s(-1)), very low Raman I(D)/I(G) ratios (below 0.1), and extremely high carbon to oxygen (C/O) ratios (∼25.3). Moreover, the graphene ink prepared in dimethylformamide can exhibit concentrations as high as 6 mg mL(-1), thus qualifying this material for intriguing applications such as transparent conductive films and flexible supercapacitors. In general, this robust method for electrochemical exfoliation of graphite offers great promise for the preparation of graphene that can be utilized in industrial applications to create integrated nanocomposites, conductive or mechanical additives, as well as energy storage and conversion devices.
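    A minimal sketch of how the quoted I(D)/I(G) figure of merit is read off a measured spectrum: take peak heights in windows around the D band (~1350 cm^-1) and G band (~1580 cm^-1). The window bounds and the synthetic test spectrum below are assumptions, not data from the paper.

```python
import numpy as np

def id_ig_ratio(shift_cm, intensity):
    """Peak-height I(D)/I(G): D band near 1350 cm^-1, G band near 1580 cm^-1.
    Window bounds are assumptions, not values from the paper."""
    shift_cm, intensity = np.asarray(shift_cm), np.asarray(intensity)
    d = intensity[(shift_cm > 1300) & (shift_cm < 1400)].max()
    g = intensity[(shift_cm > 1530) & (shift_cm < 1630)].max()
    return d / g

x = np.linspace(1100, 1800, 700)                     # synthetic test spectrum
spec = 0.08 * np.exp(-((x - 1350) / 15) ** 2) + np.exp(-((x - 1582) / 12) ** 2)
print(id_ig_ratio(x, spec))                          # ~0.08, i.e. below 0.1
```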

  5. Adaptive, High-Order, and Scalable Software Elements for Dynamic Rupture Simulations in Complex Geometries

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.; Aranda, A. R.

    2014-12-01

    The goal of this work is to develop a new set of simulation tools for earthquake rupture dynamics based on state-of-the-art high-order, adaptive numerical methods capable of handling complex geometries. High-order methods are ideal for earthquake rupture simulations as the problems are wave-dominated and the waves excited in simulations propagate over distances much larger than their fundamental wavelength. When high-order methods are used for such problems significantly fewer degrees of freedom are required as compared with low-order methods. The base numerical method in our new software elements is a discontinuous Galerkin method based on curved, Kronecker product hexahedral elements. We currently use MPI for off-node parallelism and are in the process of exploring strategies for on-node parallelism. Spatial mesh adaptivity is handled using the p4est library and temporal adaptivity is achieved through an Adams-Bashforth based local time stepping method; we are presently in the process of including dynamic spatial adaptivity, which we believe will be valuable for capturing the small-scale features around the propagating rupture front. One of the key features of our software elements is that the method is provably stable, even after the inclusion of the nonlinear friction laws which govern rupture dynamics. In this presentation we will both outline the structure of the software elements as well as validate the rupture dynamics with SCEC benchmark test problems. We are also presently developing several realistic simulation geometries which may also be reported on. Finally, the software elements that we have designed are fully public domain and have been designed with tightly coupled, wave-dominated multiphysics applications in mind. This latter design decision means the software elements are applicable to many other geophysical and non-geophysical applications.
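    The temporal scheme mentioned above is Adams-Bashforth based; as a reminder of the building block, here is a plain second-order Adams-Bashforth integrator (global time step only; the paper's local time stepping additionally assigns different step sizes per element, which this sketch does not attempt).

```python
def ab2(f, y0, t0, dt, nsteps):
    """Second-order Adams-Bashforth: y_{n+1} = y_n + dt*(3/2 f_n - 1/2 f_{n-1}).
    Bootstrapped with a single forward-Euler step."""
    y, t = y0, t0
    f_prev = f(t, y)
    y, t = y + dt * f_prev, t + dt          # Euler bootstrap
    for _ in range(nsteps - 1):
        f_curr = f(t, y)
        y = y + dt * (1.5 * f_curr - 0.5 * f_prev)
        t += dt
        f_prev = f_curr
    return y

print(ab2(lambda t, y: -y, 1.0, 0.0, 0.01, 100))  # ~exp(-1) = 0.3679
```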

  6. Scalable Memory Registration for High-Performance Networks Using Helper Threads

    SciTech Connect

    Li, Dong; Cameron, Kirk W.; Nikolopoulos, Dimitrios; de Supinski, Bronis R.; Schulz, Martin

    2011-01-01

    Remote DMA (RDMA) enables high performance networks to reduce data copying between an application and the operating system (OS). However, RDMA operations in some high performance networks require communication memory explicitly registered with the network adapter and pinned by the OS. Memory registration and pinning limit the flexibility of the memory system and reduce the amount of memory that user processes can allocate. These issues become more significant on multicore platforms, since registered memory demand grows linearly with the number of processor cores. In this paper we propose a new memory registration/deregistration strategy to reduce registered memory on multicore architectures for HPC applications. We hide the cost of dynamic memory management by offloading all dynamic memory registration and deregistration requests to a dedicated memory management helper thread. We investigate design policies and performance implications of the helper thread approach. We evaluate our framework with the NAS parallel benchmarks, for which our registration scheme significantly reduces the registered memory (23.62% on average and up to 49.39%) and avoids memory registration/deregistration costs for reused communication memory. We show that our system enables the execution of problem sizes that could not complete under existing memory registration strategies.
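    Stripped of the network-stack specifics, the core pattern is a dedicated thread that serializes registration and deregistration requests off the application's critical path. A minimal Python stand-in using a work queue; a real implementation would issue adapter calls (e.g., verbs ibv_reg_mr) rather than prints.

```python
import queue
import threading

reg_queue: queue.Queue = queue.Queue()

def helper() -> None:
    """Dedicated thread: serializes (de)registration off the critical path."""
    while True:
        item = reg_queue.get()
        if item is None:               # shutdown sentinel
            break
        op, region = item
        # Stand-in for the real network-adapter call (e.g., verbs ibv_reg_mr).
        print(f"{op} memory region {region!r}")
        reg_queue.task_done()

t = threading.Thread(target=helper, daemon=True)
t.start()
reg_queue.put(("register", "send_buf_0"))    # application threads just enqueue
reg_queue.put(("deregister", "send_buf_0"))
reg_queue.put(None)
t.join()
```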

  7. Design of a High Resolution Scalable Cluster Based Portable Tiled Display for Earth Sciences Visualization

    NASA Astrophysics Data System (ADS)

    Nayak, A. M.; Dawe, G.; Samilo, D.; Keen, C.; Matthews, J.; Patel, A.; Im, T.; Orcutt, J.; Defanti, T.

    2006-12-01

    The Center for Earth Observations and Applications (CEOA) collaborated with researchers at the Scripps Institution of Oceanography Visualization Center and the California Institute for Telecommunications and Information Technology (Calit2) to design an advanced portable visualization system to explore geophysical and oceanography datasets at very high resolution. The system consists of 15 Dell 24" monitors arranged in a 3x5 grid (3 panels high and 5 wide). Each monitor supports a resolution of up to 1920 x 1200 and is driven by one node of a cluster of 15 Intel Mac Minis. The tiled display supports a total resolution of over 34 million pixels and can be used either as a single large desktop to display rendered animations, HD movies and image files or to display web-based content on each panel for simultaneous viewing of multiple datasets. The system is enclosed in a custom-built case that can hold all the required components and transported to research sites or to meetings and conferences for public awareness activities. We call the system the 'Mobile INteractive Imaging Multidisplay Environment' or simply 'miniMe'. The design of the miniMe wall is based on a class of advanced display systems called Geowall-2 developed at the Electronic Visualization Laboratory, University of Illinois at Chicago.

  8. Rapid, scalable and highly automated HLA genotyping using next-generation sequencing: a transition from research to diagnostics

    PubMed Central

    2013-01-01

    Background Human leukocyte antigen matching at allelic resolution is proven clinically significant in hematopoietic stem cell transplantation, lowering the risk of graft-versus-host disease and mortality. However, due to the ever growing HLA allele database, tissue typing laboratories face substantial challenges. In light of the complexity and the high degree of allelic diversity, it has become increasingly difficult to define the classical transplantation antigens at high-resolution by using well-tried methods. Thus, next-generation sequencing is entering into diagnostic laboratories at the perfect time and serving as a promising tool to overcome intrinsic HLA typing problems. Therefore, we have developed and validated a scalable automated HLA class I and class II typing approach suitable for diagnostic use. Results A validation panel of 173 clinical and proficiency testing samples was analysed, demonstrating 100% concordance to the reference method. From a total of 1,273 loci we were able to generate 1,241 (97.3%) initial successful typings. The mean ambiguity reduction for the analysed loci was 93.5%. Allele assignment including intronic sequences showed an improved resolution (99.2%) of non-expressed HLA alleles. Conclusion We provide a powerful HLA typing protocol offering a short turnaround time of only two days, a fully integrated workflow and most importantly a high degree of typing reliability. The presented automated assay is flexible and can be scaled by specific primer compilations and the use of different 454 sequencing systems. The workflow was successfully validated according to the policies of the European Federation for Immunogenetics. Next-generation sequencing seems to become one of the new methods in the field of Histocompatibility. PMID:23557197

  9. A Scalable and High-Yield Strategy for the Synthesis of Sequence-Defined Macromolecules.

    PubMed

    Solleder, Susanne C; Zengel, Deniz; Wetzel, Katharina S; Meier, Michael A R

    2016-01-18

    The efficient synthesis of a sequence-defined decamer, its characterization, and its straightforward dimerization through self-metathesis are described. For this purpose, a monoprotected AB monomer was designed and used to synthesize a decamer bearing ten different and selectable side chains by iterative Passerini three-component reaction (P-3CR) and subsequent deprotection. The highly efficient procedure provided excellent yields and allows for the multigram-scale synthesis of such perfectly defined macromolecules. An olefin was introduced at the end of the synthesis, allowing the self-metathesis reaction of the resulting decamer to provide a sequence-defined 20-mer with a molecular weight of 7046.40 g mol(-1). The obtained oligomers were carefully characterized by NMR and IR spectroscopy, GPC and GPC coupled to ESI-MS, and mass spectrometry (FAB and orbitrap ESI-MS). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Globally scalable generation of high-resolution land cover from multispectral imagery

    NASA Astrophysics Data System (ADS)

    Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.

    2017-05-01

    We present an automated method of generating high resolution (2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
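    As a loose, small-scale stand-in for the pattern recognition network described above, the sketch below trains a scikit-learn MLP on per-pixel spectral features. The eight-band layout, the fake labeling rule, and the layer sizes are all assumptions for illustration, not details of the authors' system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                  # fake 8-band per-pixel spectra
y = (X[:, 7] > X[:, 4]).astype(int)        # fake "vegetation" rule: NIR > red

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```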

  11. Scalable, high-capacity optical switches for Internet routers and moving platforms

    NASA Astrophysics Data System (ADS)

    Joe, In-Sung

    Internet traffic nearly doubles every year, and we need faster routers with higher ports count, yet lower electrical power consumption. Current internet routers use electrical switches that consume large amounts of electrical power to operate at high data rates. These internet routers dissipate ˜ 10kW per rack, and their capacity is limited by cooling constraints. The power consumption is also critical for moving platforms. As avionics advance, the demand for larger capacity networks increases. Optical fibers are already chosen for high speed data transmission in advanced aircraft. In optical communication systems, integrated passive optical components, such as Array Waveguide Gratings (AWGs), have provided larger capacity with lower power consumption, because minimal electrical power is required for their operation. In addition, compact, wavelength-tunable semiconductor lasers with wide tuning ranges that can switch their wavelengths in tens of nanoseconds have been demonstrated. Here we present a wavelength-selective optical packet switch based on Waveguide Grating Routers (WGRs), passive splitters, and combiners. Tunable lasers on the transmitter side are the only active switching elements. The WGR is operated on multiple Free Spectral Ranges (FSRs) to achieve increased port count and switching capacity while maintaining strict-sense, non-blocking operation. Switching times of less than 24ns between two wavelengths covering three FSRs is demonstrated experimentally. The electrical power consumption, size, weight, and cost of our optical switch is compared with those of conventional electrical switches, showing substantial improvements at large throughputs (˜2 Tb/s full duplex). A revised switch design that does not suffer optical loss from star couplers is proposed. This switch design uses only WGRs, and it is suitable for networks with stringent power budgets. The burst nature of the optical packet transmission requires clock recovery for every incoming
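    The strict-sense non-blocking behavior rests on the cyclic routing property of the WGR: under a common toy convention, wavelength index w entering input port i exits output port (i + w) mod N, so wavelengths one FSR apart fold onto the same output. A minimal model follows; actual port conventions vary by device.

```python
def wgr_output_port(input_port: int, wavelength_index: int, n_ports: int) -> int:
    """Cyclic AWG/WGR routing: output depends on (input + wavelength) mod N.
    Toy convention; real devices differ in sign and offset."""
    return (input_port + wavelength_index) % n_ports

N = 8
for fsr in range(3):                       # three free spectral ranges
    w = 5 + fsr * N                        # same channel offset, different FSR
    print(fsr, wgr_output_port(2, w, N))   # lands on the same output port
```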

  12. High-throughput miniaturized bioreactors for cell culture process development: reproducibility, scalability, and control.

    PubMed

    Rameez, Shahid; Mostafa, Sigma S; Miller, Christopher; Shukla, Abhinav A

    2014-01-01

    Decreasing the timeframe for cell culture process development has been a key goal toward accelerating biopharmaceutical development. Advanced Microscale Bioreactors (ambr™) is an automated micro-bioreactor system with miniature single-use bioreactors with a 10-15 mL working volume controlled by an automated workstation. This system was compared to conventional bioreactor systems in terms of its performance for the production of a monoclonal antibody in a recombinant Chinese Hamster Ovary cell line. The miniaturized bioreactor system was found to produce cell culture profiles that matched across scales to 3 L, 15 L, and 200 L stirred tank bioreactors. The processes used in this article involve complex feed formulations, perturbations, and strict process control within the design space, which are in-line with processes used for commercial scale manufacturing of biopharmaceuticals. Changes to important process parameters in ambr™ resulted in predictable cell growth, viability and titer changes, which were in good agreement to data from the conventional larger scale bioreactors. ambr™ was found to successfully reproduce variations in temperature, dissolved oxygen (DO), and pH conditions similar to the larger bioreactor systems. Additionally, the miniature bioreactors were found to react well to perturbations in pH and DO through adjustments to the Proportional and Integral control loop. The data presented here demonstrates the utility of the ambr™ system as a high throughput system for cell culture process development. © 2014 American Institute of Chemical Engineers.
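    The pH and DO loops referred to above are proportional-integral controllers; a generic discrete PI sketch is given below. The gains and setpoint are hypothetical, not ambr(TM) settings.

```python
class PI:
    """Discrete proportional-integral controller (generic sketch)."""
    def __init__(self, kp: float, ki: float, dt: float):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        err = setpoint - measured
        self.integral += err * self.dt
        return self.kp * err + self.ki * self.integral

do_loop = PI(kp=2.0, ki=0.1, dt=1.0)     # hypothetical DO-control gains
print(do_loop.update(setpoint=40.0, measured=35.0))  # % dissolved oxygen
```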

  13. Cactus and Visapult: A case study of ultra-high performance distributed visualization using connectionless protocols

    SciTech Connect

    Shalf, John; Bethel, E. Wes

    2002-05-07

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. Many such problems still require interactive visualization tools to make sense of multi-terabyte data stores. Visapult is a parallel volume rendering tool that employs distributed components, latency tolerant algorithms, and high performance network I/O for effective remote visualization of massive datasets. In this paper we discuss using connectionless protocols to accelerate Visapult network I/O and interfacing Visapult to the Cactus General Relativity code to enable scalable remote monitoring and steering capabilities. With these modifications, network utilization has moved from 25 percent of line-rate using tuned multi-streamed TCP to sustaining 88 percent of line rate using the new UDP-based transport protocol.
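    In the spirit of the connectionless transport used here, the sketch below paces UDP datagrams toward a target bit rate. It is a toy sender only, with none of the sequencing, loss recovery, or steering logic of the actual Visapult/Cactus protocol; the address and rate are placeholders.

```python
import socket
import time

def send_rate_limited(data: bytes, addr=("127.0.0.1", 9000),
                      mtu=1400, rate_bps=100_000_000):
    """Pace UDP datagrams toward a target bit rate (toy sender only:
    no sequencing, loss recovery, or congestion response)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = mtu * 8 / rate_bps          # seconds between datagrams
    for off in range(0, len(data), mtu):
        sock.sendto(data[off:off + mtu], addr)
        time.sleep(interval)
    sock.close()

send_rate_limited(b"x" * 14_000)           # ten datagrams to a placeholder address
```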

  14. Towards Scalable Cost-Effective Service and Survivability Provisioning in Ultra High Speed Networks

    SciTech Connect

    Bin Wang

    2006-12-01

    Optical transport networks based on wavelength division multiplexing (WDM) are considered to be the most appropriate choice for future Internet backbone. On the other hand, future DOE networks are expected to have the ability to dynamically provision on-demand survivable services to suit the needs of various high performance scientific applications and remote collaboration. Since a failure in a WDM network such as a cable cut may result in a tremendous amount of data loss, efficient protection of data transport in WDM networks is therefore essential. As the backbone network is moving towards GMPLS/WDM optical networks, the unique requirement to support DOE’s science mission results in challenging issues that are not directly addressed by existing networking techniques and methodologies. The objectives of this project were to develop cost effective protection and restoration mechanisms based on dedicated path, shared path, preconfigured cycle (p-cycle), and so on, to deal with single failure, dual failure, and shared risk link group (SRLG) failure, under different traffic and resource requirement models; to devise efficient service provisioning algorithms that deal with application specific network resource requirements for both unicast and multicast; to study various aspects of traffic grooming in WDM ring and mesh networks to derive cost effective solutions while meeting application resource and QoS requirements; to design various diverse routing and multi-constrained routing algorithms, considering different traffic models and failure models, for protection and restoration, as well as for service provisioning; to propose and study new optical burst switched architectures and mechanisms for effectively supporting dynamic services; and to integrate research with graduate and undergraduate education. All objectives have been successfully met. This report summarizes the major accomplishments of this project. The impact of the project manifests in many aspects: First
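    Dedicated-path protection of the kind studied here starts from a working path plus a link-disjoint backup. A minimal sketch using networkx on a hypothetical six-node topology:

```python
import networkx as nx

g = nx.Graph()                     # hypothetical six-node optical topology
g.add_edges_from([(0, 1), (1, 2), (2, 5), (0, 3), (3, 4), (4, 5), (1, 4)])

# Two edge-disjoint paths between nodes 0 and 5: one working, one protection.
paths = list(nx.edge_disjoint_paths(g, 0, 5))
working, protection = paths[0], paths[1]
print("working:", working, "protection:", protection)
```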

  15. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    SciTech Connect

    Jain, Atul K.

    2016-09-14

    The overall objectives of this DOE funded project is to combine scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load balancing algorithms.

  16. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, Nb, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows [Formula: see text] versus [Formula: see text] cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from [Formula: see text] to [Formula: see text] with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10(-4) to 10(-3) to give acceptable compromise between efficiency and accuracy.
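    A small dense-matrix sketch of the compound two-step compression: pivoted incomplete Cholesky of a symmetric positive semidefinite matrix (standing in for the unfolded integral tensor), followed by a truncated SVD of the Cholesky factor. The truncation thresholds and the random low-rank test matrix are assumptions, not the paper's settings.

```python
import numpy as np

def pivoted_cholesky(a, tol=1e-8):
    """Incomplete Cholesky with diagonal pivoting; returns L with a ~ L @ L.T."""
    a = np.array(a, dtype=float)
    n = a.shape[0]
    perm = np.arange(n)
    L = np.zeros((n, n))
    rank = n
    for k in range(n):
        p = k + int(np.argmax(np.diag(a)[k:]))        # largest remaining pivot
        a[[k, p]] = a[[p, k]]; a[:, [k, p]] = a[:, [p, k]]
        L[[k, p]] = L[[p, k]]; perm[[k, p]] = perm[[p, k]]
        d = a[k, k]
        if d <= tol:                                   # remaining pivots negligible
            rank = k
            break
        L[k, k] = np.sqrt(d)
        L[k + 1:, k] = a[k + 1:, k] / L[k, k]
        a[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    inv = np.empty(n, dtype=int)
    inv[perm] = np.arange(n)                           # undo the row permutation
    return L[inv, :rank]

rng = np.random.default_rng(1)
B = rng.normal(size=(40, 8))
A = B @ B.T                                   # low-rank SPD stand-in
L = pivoted_cholesky(A)
U, s, _ = np.linalg.svd(L, full_matrices=False)
r = int((s > 1e-4 * s[0]).sum())              # SVD truncation (assumed threshold)
V = U[:, :r] * s[:r]                          # final low-rank vectors
print(L.shape, r, np.linalg.norm(A - V @ V.T))
```

    The storage win mirrors the paper's point: the full matrix needs n^2 numbers, while the compressed factors need only n*r.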

  17. SFT: Scalable Fault Tolerance

    SciTech Connect

    Petrini, Fabrizio; Nieplocha, Jarek; Tipparaju, Vinod

    2006-04-15

    In this paper we will present a new technology that we are currently developing within the SFT: Scalable Fault Tolerance FastOS project which seeks to implement fault tolerance at the operating system level. Major design goals include dynamic reallocation of resources to allow continuing execution in the presence of hardware failures, very high scalability, high efficiency (low overhead), and transparency—requiring no changes to user applications. Our technology is based on a global coordination mechanism, that enforces transparent recovery lines in the system, and TICK, a lightweight, incremental checkpointing software architecture implemented as a Linux kernel module. TICK is completely user-transparent and does not require any changes to user code or system libraries; it is highly responsive: an interrupt, such as a timer interrupt, can trigger a checkpoint in as little as 2.5μs; and it supports incremental and full checkpoints with minimal overhead—less than 6% with full checkpointing to disk performed as frequently as once per minute.

  18. Highly scalable, uniform, and sensitive biosensors based on top-down indium oxide nanoribbons and electronic enzyme-linked immunosorbent assay.

    PubMed

    Aroonyadet, Noppadol; Wang, Xiaoli; Song, Yan; Chen, Haitian; Cote, Richard J; Thompson, Mark E; Datar, Ram H; Zhou, Chongwu

    2015-03-11

    Nanostructure field-effect transistor (FET) biosensors have shown great promise for ultrasensitive biomolecular detection. Top-down assembly of these sensors increases scalability and device uniformity but faces fabrication challenges in achieving the small dimensions needed for sensitivity. We report top-down fabricated indium oxide (In2O3) nanoribbon FET biosensors using highly scalable radio frequency (RF) sputtering to create uniform channel thicknesses ranging from 50 to 10 nm. We combine this scalable sensing platform with amplification from electronic enzyme-linked immunosorbent assay (ELISA) to achieve high sensitivity to target analytes such as streptavidin and human immunodeficiency virus type 1 (HIV-1) p24 proteins. Our approach circumvents Debye screening in ionic solutions and detects p24 protein at 20 fg/mL (about 250 viruses/mL or about 3 orders of magnitude lower than commercial ELISA) with a 35% conduction change in human serum. The In2O3 nanoribbon biosensors have 100% device yield and use a simple two-mask photolithography process. The electrical properties of 50 In2O3 nanoribbon FETs showed good uniformity in on-state current, on/off current ratio, mobility, and threshold voltage. In addition, the sensors show excellent pH sensitivity over a broad range (pH 4 to 9) as well as over the physiological-related pH range (pH 6.8 to 8.2). With the demonstrated sensitivity, scalability, and uniformity, the In2O3 nanoribbon sensor platform makes great progress toward clinical testing, such as for early diagnosis of acquired immunodeficiency syndrome (AIDS).

  19. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    PubMed Central

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  20. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation.

    PubMed

    Augustin, Christoph M; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J; Niederer, Steven A; Haase, Gundolf; Plank, Gernot

    2016-01-15

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate

  1. Anatomically accurate high resolution modeling of human whole heart electromechanics: A strongly scalable algebraic multigrid solver method for nonlinear deformation

    NASA Astrophysics Data System (ADS)

    Augustin, Christoph M.; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J.; Niederer, Steven A.; Haase, Gundolf; Plank, Gernot

    2016-01-01

    Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which is not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
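    The paper's AMG preconditioner is custom-built for cardiac elasticity, but the overall setup/solve pattern can be sketched at small scale with pyamg's smoothed aggregation preconditioning a conjugate-gradient solve. The Poisson test matrix below is a generic stand-in, not the cardiac operator.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

A = pyamg.gallery.poisson((200, 200), format="csr")  # stand-in SPD system
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)            # AMG setup phase
M = ml.aspreconditioner()                            # expose as a preconditioner
x, info = cg(A, b, M=M, atol=1e-8)                   # AMG-preconditioned Krylov solve
print(info, np.linalg.norm(b - A @ x))
```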

  2. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine.

    PubMed

    Peterson, Kevin J; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss(®) Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs.
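    Conceptually, an eCQM reduces to a map step (classify each patient into measure populations) and a reduce step (aggregate counts), which is what makes MapReduce-style horizontal scaling natural. A toy sketch follows; the patient fields and measure logic are hypothetical, not CMS definitions.

```python
from functools import reduce

patients = [                       # hypothetical flat patient records
    {"age": 70, "diabetic": True,  "hba1c_tested": True},
    {"age": 55, "diabetic": True,  "hba1c_tested": False},
    {"age": 40, "diabetic": False, "hba1c_tested": False},
]

def mapper(p):
    """Emit (denominator, numerator) membership for one patient."""
    in_denom = p["diabetic"] and p["age"] >= 18
    in_num = in_denom and p["hba1c_tested"]
    return (int(in_denom), int(in_num))

def reducer(acc, x):
    return (acc[0] + x[0], acc[1] + x[1])

denom, num = reduce(reducer, map(mapper, patients), (0, 0))
print(f"measure rate: {num}/{denom}")
```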

  3. Application of the FETI Method to ASCI Problems: Scalability Results on One Thousand Processors and Discussion of Highly Heterogeneous Problems

    SciTech Connect

    Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M; Pierson, K.; Rixen, D.

    1999-04-01

    We report on the application of the one-level FETI method to the solution of a class of substructural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.

  4. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.

  5. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries

    NASA Astrophysics Data System (ADS)

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-03-01

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared by using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. With a great promise for large scale production, the hN-CCs, having large surface area and high-level nitrogen content (N/C atomic ratio of ca. 13%), possess superior crossover resistance, selective activity and catalytic stability towards oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion battery, hN-CCs also exhibit excellent cycle performance and high rate capacity with a reversible capacity of as high as 1046 mA h g^-1 at a current density of 50 mA g^-1 after 50 cycles. These features make the hN-CCs developed in this study promising as suitable substitutes for the expensive noble metal catalysts in the next generation alkaline fuel cells, and as advanced electrode materials in lithium-ion batteries.

  6. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries.

    PubMed

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-04-07

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared by using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. With a great promise for large scale production, the hN-CCs, having large surface area and high-level nitrogen content (N/C atomic ratio of ca. 13%), possess superior crossover resistance, selective activity and catalytic stability towards oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion battery, hN-CCs also exhibit excellent cycle performance and high rate capacity with a reversible capacity of as high as 1046 mA h g(-1) at a current density of 50 mA g(-1) after 50 cycles. These features make the hN-CCs developed in this study promising as suitable substitutes for the expensive noble metal catalysts in the next generation alkaline fuel cells, and as advanced electrode materials in lithium-ion batteries.

  7. Scalable motion vector coding

    NASA Astrophysics Data System (ADS)

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter

    2004-11-01

    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
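    The proposed predictor is median-based: in the H.264-style scheme this means the predictor is the component-wise median of the left, top, and top-right neighbors' motion vectors, and only the residual is entropy-coded. A minimal sketch, with made-up neighbor values:

```python
import numpy as np

def median_mv_predictor(left, top, top_right):
    """Component-wise median of three neighboring motion vectors
    (H.264-style median prediction)."""
    mvs = np.array([left, top, top_right])
    return np.median(mvs, axis=0)

mv = np.array([5, -2])                         # current block's motion vector
pred = median_mv_predictor([4, -1], [6, -3], [5, 0])
residual = mv - pred                           # only the residual is entropy-coded
print(pred, residual)
```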

  8. Scalable Node Monitoring

    SciTech Connect

    Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.

    2012-07-30

    Project description is: (1) Build a high performance computer; and (2) Create a tool to monitor node applications in Component Based Tool Framework (CBTF) using code from Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be able to be easily added to the tool. CBTF stands for Component Based Tool Framework. It's scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and 'EASILY' added to a new tool. There are three levels of CBTF: (1) frontend node - interacts with users; (2) filter nodes - filter or concatenate information from backend nodes; and (3) backend nodes - where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Services. It's a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It's dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts and more. It works by: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; backend nodes send information to the filter nodes; and filter nodes concatenate information and send it to a database on the frontend node. Ltool is a useful tool when it comes to monitoring nodes on a cluster because the overhead involved with running the tool is not particularly high and it will automatically scale to any size cluster.

  9. Volume server: A scalable high speed and high capacity magnetic tape archive architecture with concurrent multi-host access

    NASA Technical Reports Server (NTRS)

    Rybczynski, Fred

    1993-01-01

    A major challenge facing data processing centers today is data management. This includes the storage of large volumes of data and access to it. Current media storage for large data volumes is typically off line and frequently off site in warehouses. Access to data archived in this fashion can be subject to long delays, errors in media selection and retrieval, and even loss of data through misplacement or damage to the media. Similarly, designers responsible for architecting systems capable of continuous high-speed recording of large volumes of digital data are faced with the challenge of identifying technologies and configurations that meet their requirements. Past approaches have tended to evaluate the combination of the fastest tape recorders with the highest capacity tape media and then to compromise technology selection as a consequence of cost. This paper discusses an architecture that addresses both of these challenges and proposes a cost effective solution based on robots, high speed helical scan tape drives, and large-capacity media.

  10. Highly Efficient High-Pressure Homogenization Approach for Scalable Production of High-Quality Graphene Sheets and Sandwich-Structured α-Fe2O3/Graphene Hybrids for High-Performance Lithium-Ion Batteries.

    PubMed

    Qi, Xin; Zhang, Hao-Bin; Xu, Jiantie; Wu, Xinyu; Yang, Dongzhi; Qu, Jin; Yu, Zhong-Zhen

    2017-03-29

    A highly efficient and continuous high-pressure homogenization (HPH) approach is developed for scalable production of graphene sheets and sandwich-structured α-Fe2O3/graphene hybrids by liquid-phase exfoliation of stage-1 FeCl3-based graphite intercalation compounds (GICs). The enlarged interlayer spacing of FeCl3-GICs facilitates their efficient exfoliation to produce high-quality graphene sheets. Moreover, sandwich-structured α-Fe2O3/few-layer graphene (FLG) hybrids are readily fabricated by thermally annealing the FeCl3 intercalated FLG sheets. As an anode material of Li-ion battery, α-Fe2O3/FLG hybrid shows a satisfactory long-term cycling performance with an excellent specific capacity of 1100.5 mA h g(-1) after 350 cycles at 200 mA g(-1). A high reversible capacity of 658.5 mA h g(-1) is achieved after 200 cycles at 1 A g(-1) and maintained without notable decay. The satisfactory cycling stability and the outstanding capability of α-Fe2O3/FLG hybrid are attributed to its unique sandwiched structure consisting of highly conducting FLG sheets and covalently anchored α-Fe2O3 particles. Therefore, the highly efficient and scalable preparation of high-quality graphene sheets along with the excellent electrochemical properties of α-Fe2O3/FLG hybrids makes the HPH approach promising for producing high-performance graphene-based energy storage materials.

  11. OneBac: Platform for Scalable and High-Titer Production of Adeno-Associated Virus Serotype 1–12 Vectors for Gene Therapy

    PubMed Central

    Mietzsch, Mario; Grasse, Sabrina; Zurawski, Catherine; Weger, Stefan; Bennett, Antonette; Agbandje-McKenna, Mavis; Muzyczka, Nicholas; Zolotukhin, Sergei

    2014-01-01

    Abstract Scalable and genetically stable recombinant adeno-associated virus (rAAV) production systems combined with facile adaptability for an extended repertoire of AAV serotypes are required to keep pace with the rapidly increasing clinical demand. For scalable high-titer production of the full range of rAAV serotypes 1–12, we developed OneBac, consisting of stable insect Sf9 cell lines harboring silent copies of AAV1–12 rep and cap genes induced upon infection with a single baculovirus that also carries the rAAV genome. rAAV burst sizes reach up to 5×105 benzonase-resistant, highly infectious genomic particles per cell, exceeding typical yields of current rAAV production systems. In contrast to recombinant rep/cap baculovirus strains currently employed for large-scale rAAV production, the Sf9rep/cap cell lines are genetically stable, leading to undiminished rAAV burst sizes over serial passages. Thus, OneBac combines full AAV serotype options with the capacity for stable scale-up production, the current bottleneck for the transition of AAV from gene therapy trials to routine clinical treatment. PMID:24299301

  12. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    PubMed

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes with high theoretical capacity show great potential for next-generation lithium-ion batteries. Although the huge volume change during lithium insertion and extraction leads to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address these challenges and improve electrochemical performance. However, complex and expensive fabrication processes hinder the widespread application of nanostructured alloy anodes, creating an urgent demand for low-cost, scalable processes to fabricate building blocks with fine control of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high-energy ball-milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh g(-1) over 600 cycles at a rate of 0.5C for Si-graphene "yin" particles and 750 mAh g(-1) over 300 cycles at 0.2C for Sn-graphene "yang" particles. We thus open up a new approach to fabricating alloy anode materials at low cost, with low energy consumption, and at large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectric and optoelectronic applications.

  13. Scalable Synthesis of Few-Layer MoS2 Incorporated into Hierarchical Porous Carbon Nanosheets for High-Performance Li- and Na-Ion Battery Anodes.

    PubMed

    Park, Seung-Keun; Lee, Jeongyeon; Bong, Sungyool; Jang, Byungchul; Seong, Kwang-Dong; Piao, Yuanzhe

    2016-08-03

    It is still a challenging task to develop a facile and scalable process to synthesize porous hybrid materials with high electrochemical performance. Herein, a scalable strategy is developed for the synthesis of few-layer MoS2 incorporated into hierarchical porous carbon (MHPC) nanosheet composites as anode materials for both Li-ion (LIB) and Na-ion batteries (SIB). Inexpensive oleylamine (OA) is introduced not only to hinder the stacking of MoS2 nanosheets but also to provide a conductive carbon source, allowing large-scale production. In addition, a SiO2 template is adopted to direct the growth of both carbon and MoS2 nanosheets, resulting in the formation of hierarchical porous structures with interconnected networks. Due to these unique features, the as-obtained MHPC shows substantial reversible capacity and very long cycling performance when used as an anode material for LIBs and SIBs, even at high current density. Indeed, this material delivers reversible capacities of 732 and 280 mA h g(-1) after 300 cycles at 1 A g(-1) in LIBs and SIBs, respectively. The results suggest that these MHPC composites also have tremendous potential for applications in other fields.

  14. Scalable computations in penetration mechanics

    SciTech Connect

    Kimsey, K.D.; Schraml, S.J.; Hertel, E.S.

    1998-01-01

    This paper presents an overview of an explicit message-passing paradigm for an Eulerian finite volume method for modeling solid dynamics problems involving shock wave propagation, multiple materials, and large deformations. Three-dimensional simulations of high-velocity impact were conducted on the IBM SP2, the SGI Power Challenge Array, and the SGI Origin 2000. The scalability of the message-passing code on distributed-memory and symmetric multiprocessor architectures is presented and compared to ideal linear performance.
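
    As a minimal illustration of the explicit message-passing pattern such Eulerian codes rely on, the following sketch exchanges ghost cells between neighboring 1-D subdomains using mpi4py. The toy decomposition and array contents are assumptions for illustration, not the code evaluated in the paper:

      # Illustrative 1-D halo (ghost-cell) exchange with mpi4py.
      # Run with, e.g.: mpiexec -n 4 python halo.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 8                                       # interior cells per rank
      u = np.full(n_local + 2, rank, dtype=np.float64)  # +2 ghost cells

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Send the rightmost interior cell right; receive the left ghost cell.
      comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
      # Send the leftmost interior cell left; receive the right ghost cell.
      comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
      # Ghost cells are now valid; an explicit finite-volume update can proceed.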

  15. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    PubMed

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented via the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles react with CH3Cl gas over various Cu-based catalyst particles, creating macropores within the unreacted Si while the accompanying carbon deposition generates porous Si/C composites. Taking advantage of the interconnected porous structure and the conductive carbon coating obtained after simple post-treatment, these composites exhibit high reversible capacity and long cycle life as anodes. It is expected that by integrating the organosilane synthesis process and controlling the reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly feasible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Highly flexible, transparent and self-cleanable superhydrophobic films prepared by a facile and scalable nanopyramid formation technique

    NASA Astrophysics Data System (ADS)

    Kong, Jeong-Ho; Kim, Tae-Hyun; Kim, Ji Hoon; Park, Jong-Kweon; Lee, Deug-Woo; Kim, Soo-Hyung; Kim, Jong-Man

    2014-01-01

    A facile and scalable technique to fabricate optically transparent, mechanically flexible and self-cleanable superhydrophobic films for practical solar cell applications is proposed. The superhydrophobic films were fabricated simply by transferring a transparent porous alumina layer, which was prepared using an anodic aluminium oxidation (AAO) technique, onto a polyethylene terephthalate (PET) film with a UV-curable polymer adhesive layer, followed by the subsequent formation of alumina nano pyramids (NPs) through the time-controlled chemical etching of the transferred porous alumina membrane (PAM). It was found experimentally that the proposed functional films can ensure the superhydrophobicity in the Cassie-Baxter wetting mode with superior water-repellent properties through a series of experimental observations including static contact angle (SCA), contact angle hysteresis (CAH), sliding behaviour on the tilted film, and dynamic behaviour of the liquid droplet impacting on the film. In addition to the superior surface wetting properties, an optical transmittance of ~79% at a light wavelength of 550 nm was achieved. Furthermore, there was no significant degradation in both the surface wetting properties and morphology even after 1500-cycles of repetitive bending tests, which indicates that the proposed superhydrophobic film is mechanically robust. Finally, the practicability of the proposed self-cleanable film was proven quantitatively by observing the changes in the power conversion efficiency (PCE) of a photovoltaic device covering the film before and after the cleaning process.

  17. Scalable synthesis of hierarchical macropore-rich activated carbon microspheres assembled by carbon nanoparticles for high rate performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Zhang, Dongdong; Zhao, Jianghong; Feng, Chong; Zhao, Rijie; Sun, Yahui; Guan, Taotao; Han, Baixin; Tang, Nan; Wang, Jianlong; Li, Kaixi; Qiao, Jinli; Zhang, Jiujun

    2017-02-01

    A scalable inverse-microemulsion-polymerization-phase-separation coupling method is applied to successfully prepare hierarchical macropore-rich activated carbon microspheres (ACS) using a phenolic resin (PR) precursor followed by carbonization and KOH activation for the first time. The formed ACS materials are assembled from carbon nanoparticles (CNPs). The macropores interspersed among the component CNPs are formed after removing the non-reactive solvent phase in the course of the polymerization of the reactive PR phase, and they occupy ∼64% of the total pore volume (∼2.779 cm3 g-1) of the optimized ACS. In combination with mesopores (∼18% of the total pore volume), the ACS possesses meso/macropores approaching 82% of the total pore volume. Micropores are created in the component CNPs via KOH activation, providing shortened ion transport distances at the nanoscale. Both the hierarchical micro/meso/macroporous structure and the inner nanoparticle morphology (short ion diffusion pathways) significantly contribute to the rapid transport of electrolyte ions throughout the carbonaceous matrix, resulting in the superior rate performance of ACS-based supercapacitors. More importantly, the energy densities of the ACS supercapacitors operating in aqueous and organic electrolytes remain steady over wide power-density ranges, from 0.25 to 14.5 kW kg-1 and from 0.25 to 7.0 kW kg-1, respectively.

  18. Scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage.

    PubMed

    El-Kady, Maher F; Kaner, Richard B

    2013-01-01

    The rapid development of miniaturized electronic devices has increased the demand for compact on-chip energy storage. Microscale supercapacitors have great potential to complement or replace batteries and electrolytic capacitors in a variety of applications. However, conventional micro-fabrication techniques have proven to be cumbersome in building cost-effective micro-devices, thus limiting their widespread application. Here we demonstrate a scalable fabrication of graphene micro-supercapacitors over large areas by direct laser writing on graphite oxide films using a standard LightScribe DVD burner. More than 100 micro-supercapacitors can be produced on a single disc in 30 min or less. The devices are built on flexible substrates for flexible electronics and on-chip uses that can be integrated with MEMS or CMOS in a single chip. Remarkably, miniaturizing the devices to the microscale results in enhanced charge-storage capacity and rate capability. These micro-supercapacitors demonstrate a power density of ~200 W cm-3, which is among the highest values achieved for any supercapacitor.

  19. Scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage

    NASA Astrophysics Data System (ADS)

    El-Kady, Maher F.; Kaner, Richard B.

    2013-02-01

    The rapid development of miniaturized electronic devices has increased the demand for compact on-chip energy storage. Microscale supercapacitors have great potential to complement or replace batteries and electrolytic capacitors in a variety of applications. However, conventional micro-fabrication techniques have proven to be cumbersome in building cost-effective micro-devices, thus limiting their widespread application. Here we demonstrate a scalable fabrication of graphene micro-supercapacitors over large areas by direct laser writing on graphite oxide films using a standard LightScribe DVD burner. More than 100 micro-supercapacitors can be produced on a single disc in 30 min or less. The devices are built on flexible substrates for flexible electronics and on-chip uses that can be integrated with MEMS or CMOS in a single chip. Remarkably, miniaturizing the devices to the microscale results in enhanced charge-storage capacity and rate capability. These micro-supercapacitors demonstrate a power density of ~200 W cm-3, which is among the highest values achieved for any supercapacitor.

  20. Highly flexible, transparent and self-cleanable superhydrophobic films prepared by a facile and scalable nanopyramid formation technique.

    PubMed

    Kong, Jeong-Ho; Kim, Tae-Hyun; Kim, Ji Hoon; Park, Jong-Kweon; Lee, Deug-Woo; Kim, Soo-Hyung; Kim, Jong-Man

    2014-01-01

    A facile and scalable technique to fabricate optically transparent, mechanically flexible and self-cleanable superhydrophobic films for practical solar cell applications is proposed. The superhydrophobic films were fabricated simply by transferring a transparent porous alumina layer, which was prepared using an anodic aluminium oxidation (AAO) technique, onto a polyethylene terephthalate (PET) film with a UV-curable polymer adhesive layer, followed by the subsequent formation of alumina nano pyramids (NPs) through the time-controlled chemical etching of the transferred porous alumina membrane (PAM). It was found experimentally that the proposed functional films can ensure the superhydrophobicity in the Cassie-Baxter wetting mode with superior water-repellent properties through a series of experimental observations including static contact angle (SCA), contact angle hysteresis (CAH), sliding behaviour on the tilted film, and dynamic behaviour of the liquid droplet impacting on the film. In addition to the superior surface wetting properties, an optical transmittance of ∼79% at a light wavelength of 550 nm was achieved. Furthermore, there was no significant degradation in both the surface wetting properties and morphology even after 1500-cycles of repetitive bending tests, which indicates that the proposed superhydrophobic film is mechanically robust. Finally, the practicability of the proposed self-cleanable film was proven quantitatively by observing the changes in the power conversion efficiency (PCE) of a photovoltaic device covering the film before and after the cleaning process.

  1. A scalable tools communication infrastructure.

    SciTech Connect

    Buntinas, D.; Bosilca, G.; Graham, R. L.; Vallee, G.; Watson, G. R.; Mathematics and Computer Science; Univ. of Tennessee; ORNL; IBM

    2008-07-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  2. A Scalable Tools Communication Infrastructure

    SciTech Connect

    Buntinas, Darius; Bosilca, George; Graham, Richard L; Vallee, Geoffroy R; Watson, Gregory R.

    2008-01-01

    The Scalable Tools Communication Infrastructure (STCI) is an open source collaborative effort intended to provide high-performance, scalable, resilient, and portable communications and process control services for a wide variety of user and system tools. STCI is aimed specifically at tools for ultrascale computing and uses a component architecture to simplify tailoring the infrastructure to a wide range of scenarios. This paper describes STCI's design philosophy, the various components that will be used to provide an STCI implementation for a range of ultrascale platforms, and a range of tool types. These include tools supporting parallel run-time environments, such as MPI, parallel application correctness tools and performance analysis tools, as well as system monitoring and management tools.

  3. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible to deploy widely; Application Layer Multicast (ALM), based on end-system multicast, is practical and more scalable than IP multicast on the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without adding much control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of the data pieces according to their available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol in streaming media transmission. Data transmission between hosts is implemented in a many-to-one style within a hierarchical architecture in most circumstances. Simulations on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency, and stability.
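
    A minimal sketch of the partial-forwarding idea described above: each host forwards only a share of the stream's data pieces, in proportion to its available upload bandwidth. The peer names and bandwidth figures are hypothetical, and this is not the SHD protocol itself:

      def assign_pieces(num_pieces, peers):
          """Split piece indices among peers in proportion to their
          available upload bandwidth (peers: name -> Mbps)."""
          total = sum(peers.values())
          names = list(peers)
          assignment, start = {}, 0
          for i, name in enumerate(names):
              share = (num_pieces - start if i == len(names) - 1
                       else num_pieces * peers[name] // total)
              assignment[name] = list(range(start, start + share))
              start += share
          return assignment

      # Hypothetical peers: upload capacity determines the forwarding load.
      print(assign_pieces(100, {"A": 10, "B": 30, "C": 60}))
      # A forwards 10 pieces, B 30, C 60: load follows available bandwidth.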

  4. Facile and Scalable Synthesis Method for High-Quality Few-Layer Graphene through Solution-Based Exfoliation of Graphite.

    PubMed

    Wee, Boon-Hong; Wu, Tong-Fei; Hong, Jong-Dal

    2017-02-08

    Here we describe a facile and scalable method for preparing defect-free graphene sheets exfoliated from graphite using the positively charged polyelectrolyte precursor poly(p-phenylenevinylene) (PPV-pre) as a stabilizer in an aqueous solution. The graphene exfoliated by PPV-pre was stabilized in the solution as graphene/PPV-pre (denoted GPPV-pre), which remains a homogeneous dispersion for over a year. Of 300 selected GPPV-pre flakes, 76% had thickness values ranging from 1 to 10 nm, corresponding to one to a few layers of graphene, with lateral dimensions of 1 to 2 μm. Furthermore, this approach was expected to yield a marked decrease in the density of defects in the electronic conjugation of graphene compared to that of graphene oxide (GO) obtained by Hummers' method. The positively charged GPPV-pre was employed to fabricate a poly(ethylene terephthalate) (PET) electrode layer-by-layer with negatively charged GO, yielding a (GPPV-pre/GO)n film electrode. The PPV-pre and GO in the (GPPV-pre/GO)n films were simultaneously converted using hydroiodic acid vapor to fully conjugated PPV and reduced graphene oxide (RGO), respectively. The electrical conductivity of the (GPPV/RGO)23 multilayer films was 483 S/cm, about three times greater than that of (PPV/RGO)23 multilayer films (166 S/cm) comprising RGO prepared by Hummers' method. Furthermore, the superior electrical properties of GPPV were made evident when comparing the capacitive performances of two supercapacitor systems: (polyaniline PANi/RGO)30/(GPPV/RGO)23/PET (volumetric capacitance = 216 F/cm(3); energy density = 19 mWh/cm(3); maximum power density = 498 W/cm(3)) and (PANi/RGO)30/(PPV/RGO)23/PET (152 F/cm(3); 9 mWh/cm(3); 80 W/cm(3)).

  5. A 1T-DRAM cell based on a tunnel field-effect transistor with highly-scalable pillar and surrounding gate structure

    NASA Astrophysics Data System (ADS)

    Kim, Hyungjin; Park, Byung-Gook

    2016-08-01

    In this work, a 1-transistor (1T) dynamic random access memory (DRAM) cell based on a tunnel field-effect transistor (TFET) is introduced and its operating physics demonstrated. It is built on a pillar structure with a surrounding gate, which gives it high scalability compared with the conventional 1-transistor, 1-capacitor (1T-1C) DRAM cell, so it can easily be arranged into a 4F2 cell array. The program operation is performed not by hole generation through impact ionization or gate-induced drain leakage but by hole injection from the source region, unlike in other 1T DRAM cells. In addition, the tunneling current mechanism gives the proposed device low-power DRAM operation and good retention characteristics.

  6. Sandia Scalable Encryption Software

    SciTech Connect

    Tarman, Thomas D.

    1997-08-13

    Sandia Scalable Encryption Library (SSEL) Version 1.0 is a library of functions that implement Sandia's scalable encryption algorithm. This algorithm is used to encrypt Asynchronous Transfer Mode (ATM) data traffic, and is capable of operating on an arbitrary number of bits at a time (which permits scaling via parallel implementations), while being interoperable with differently scaled versions of this algorithm. The routines in this library implement 8-bit and 32-bit versions of a non-linear mixer which is compatible with Sandia's hardware-based ATM encryptor.

  7. Scalable Parallel Utopia

    SciTech Connect

    King, D.; Pierson, L.

    1998-10-01

    This contribution proposes a 128-bit-wide interface structure, clocked at approximately 80 MHz, that will operate at 10 Gbps as a strawman for an OC-192c Utopia specification. In addition, the concept of scalable data-transfer width is proposed in order to maintain manageably low clock rates.
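
    A quick sanity check on the quoted figures (a sketch of the arithmetic, assuming one full 128-bit word is transferred per clock cycle):

      $128~\text{bits/cycle} \times 80~\text{MHz} = 10.24~\text{Gbps}$

    which leaves a small margin above the 9.953 Gbps OC-192 line rate for cell and framing overhead.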

  8. N- and S-doped high surface area carbon derived from soya chunks as scalable and efficient electrocatalysts for oxygen reduction

    PubMed Central

    Rana, Moumita; Arora, Gunjan; Gautam, Ujjal K

    2015-01-01

    Highly stable, cost-effective electrocatalysts facilitating oxygen reduction are crucial for the commercialization of membrane-based fuel cell and battery technologies. Herein, we demonstrate that protein-rich soya chunks with a high content of N, S and P atoms are an excellent precursor for heteroatom-doped highly graphitized carbon materials. The materials are nanoporous, with a surface area exceeding 1000 m2 g−1, and they are tunable in doping quantities. These materials exhibit highly efficient catalytic performance toward oxygen reduction reaction (ORR) with an onset potential of −0.045 V and a half-wave potential of −0.211 V (versus a saturated calomel electrode) in a basic medium, which is comparable to commercial Pt catalysts and is better than other recently developed metal-free carbon-based catalysts. These exhibit complete methanol tolerance and a performance degradation of merely ∼5% as compared to ∼14% for a commercial Pt/C catalyst after continuous use for 3000 s at the highest reduction current. We found that the fraction of graphitic N increases at a higher graphitization temperature, leading to the near complete reduction of oxygen. It is believed that due to the easy availability of the precursor and the possibility of genetic engineering to homogeneously control the heteroatom distribution, the synthetic strategy is easily scalable, with further improvement in performance. PMID:27877746

  9. N- and S-doped high surface area carbon derived from soya chunks as scalable and efficient electrocatalysts for oxygen reduction

    NASA Astrophysics Data System (ADS)

    Rana, Moumita; Arora, Gunjan; Gautam, Ujjal K.

    2015-02-01

    Highly stable, cost-effective electrocatalysts facilitating oxygen reduction are crucial for the commercialization of membrane-based fuel cell and battery technologies. Herein, we demonstrate that protein-rich soya chunks with a high content of N, S and P atoms are an excellent precursor for heteroatom-doped highly graphitized carbon materials. The materials are nanoporous, with a surface area exceeding 1000 m2 g-1, and they are tunable in doping quantities. These materials exhibit highly efficient catalytic performance toward oxygen reduction reaction (ORR) with an onset potential of -0.045 V and a half-wave potential of -0.211 V (versus a saturated calomel electrode) in a basic medium, which is comparable to commercial Pt catalysts and is better than other recently developed metal-free carbon-based catalysts. These exhibit complete methanol tolerance and a performance degradation of merely ˜5% as compared to ˜14% for a commercial Pt/C catalyst after continuous use for 3000 s at the highest reduction current. We found that the fraction of graphitic N increases at a higher graphitization temperature, leading to the near complete reduction of oxygen. It is believed that due to the easy availability of the precursor and the possibility of genetic engineering to homogeneously control the heteroatom distribution, the synthetic strategy is easily scalable, with further improvement in performance.

  10. Facile and Scalable Fabrication of Highly Efficient Lead Iodide Perovskite Thin-Film Solar Cells in Air Using Gas Pump Method.

    PubMed

    Ding, Bin; Gao, Lili; Liang, Lusheng; Chu, Qianqian; Song, Xiaoxuan; Li, Yan; Yang, Guanjun; Fan, Bin; Wang, Mingkui; Li, Chengxin; Li, Changjiu

    2016-08-10

    Control of the perovskite film formation process to produce high-quality organic-inorganic metal halide perovskite thin films with uniform morphology, high surface coverage, and minimal pinholes is of great importance to highly efficient solar cells. Herein, we report on large-area light-absorbing perovskite film fabrication with a new facile and scalable gas pump method. By decreasing the total pressure in the evaporation environment, the gas pump method increases the solvent evaporation rate eightfold and thereby produces an extremely dense, uniform, full-coverage perovskite thin film. The resulting planar perovskite solar cells achieve an impressive power conversion efficiency of up to 19.00%, with an average efficiency of 17.38 ± 0.70% for 32 devices with an area of 5 × 2 mm, and 13.91% for devices with a large area of up to 1.13 cm(2). The perovskite films can easily be fabricated in ambient air at a relative humidity of 45-55%, which makes the method a promising prospect for the industrial production of large-area perovskite solar panels.

  11. SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering

    DTIC Science & Technology

    2013-11-01

    paradigms, one might consider using any of the highly scalable batched Map-Reduce technologies as, for example, implemented in Hadoop [10]. Although...extremely scalable for information processing, this approach cannot provide a scalable, low-latency approach to information. Hadoop needs to register information in the Hadoop NameNode service, and then read from disk for any brokering function that could be supported by Hadoop. Whereas successful

  12. Cost-effective scalable synthesis of mesoporous germanium particles via a redox-transmetalation reaction for high-performance energy storage devices.

    PubMed

    Choi, Sinho; Kim, Jieun; Choi, Nam-Soon; Kim, Min Gyu; Park, Soojin

    2015-02-24

    Nanostructured germanium is a promising material for high-performance energy storage devices. However, synthesizing it in a cost-effective and simple manner on a large scale remains a significant challenge. Herein, we report a redox-transmetalation reaction-based route for the large-scale synthesis of mesoporous germanium particles from germanium oxide at temperatures of 420-600 °C. We could confirm that a unique redox-transmetalation reaction occurs between Zn(0) and Ge(4+) at approximately 420 °C using temperature-dependent in situ X-ray absorption fine structure analysis. This reaction has several advantages, which include (i) the successful synthesis of germanium particles at a low temperature (∼450 °C), (ii) the accommodation of large volume changes, owing to the mesoporous structure of the germanium particles, and (iii) the ability to synthesize the particles in a cost-effective and scalable manner, as inexpensive metal oxides are used as the starting materials. The optimized mesoporous germanium anode exhibits a reversible capacity of ∼1400 mA h g(-1) after 300 cycles at a rate of 0.5 C (corresponding to the capacity retention of 99.5%), as well as stable cycling in a full cell containing a LiCoO2 cathode with a high energy density (charge capacity = 286.62 mA h cm(-3)).

  13. Rad-Hard, Miniaturized, Scalable, High-Voltage Switching Module for Power Applications

    NASA Technical Reports Server (NTRS)

    Adell, Philippe C.; Mojarradi, Mohammad; DelCastillo, Linda Y.; Vo, Tuan A.

    2011-01-01

    A paper discusses the successful development of a miniaturized, radiation-hardened, high-voltage switching module operating at 2.5 kV, suitable for space applications. The high-voltage architecture was designed, fabricated, and tested using a commercial process that combines 0.25 micrometer CMOS (complementary metal oxide semiconductor) transistors with a high-voltage lateral DMOS (diffusion metal oxide semiconductor) device with a high breakdown voltage (greater than 650 V). The high-voltage requirements are achieved by stacking a number of DMOS devices within one module, while two modules can be placed in series to achieve higher voltages. Beyond the high-voltage requirements, a second-generation prototype is currently being developed to provide improved switching capabilities (rise time and fall time over the full range of target voltages and currents) and the ability to scale the output voltage to a desired value with good accuracy (a few percent) up to 10 kV, in order to cover a wide range of high-voltage applications. In addition, to ensure miniaturization, long life, and high reliability, the assemblies will require intensive high-voltage electrostatic modeling (optimized E-field distribution throughout the module) to complete the proposed packaging approach, along with tests of the applicability of advanced materials in a space-like environment (temperature and pressure) to help prevent potential arcing and corona due to high-field regions. Finally, a single-event effect evaluation would have to be performed, and single-event mitigation methods implemented at the design and system level, or developed, to ensure complete radiation hardness of the module.
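
    As a rough sketch of the stacking arithmetic, assuming ideal, equal voltage sharing across the stack (a practical design would derate this):

      $N \geq \lceil 2.5~\text{kV} / 650~\text{V} \rceil = \lceil 3.85 \rceil = 4$

    so each 2.5 kV module needs at least four series DMOS devices, and two such modules in series reach 5 kV.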

  14. Scalable high-fidelity growth of semiconductor nanorod arrays with controlled geometry for photovoltaic devices using block copolymers.

    PubMed

    Pelligra, Candice I; Huang, Su; Singer, Jonathan P; Mayo, Anthony T; Mu, Richard R; Osuji, Chinedum O

    2014-11-12

    Controlled density semiconducting oxide arrays are highly desirable for matching nanometer length scales specific to emerging applications. This work demonstrates a facile one-step method for templating hydrothermal growth which provides arrays with high-fidelity tuning of nanorod spacing and diameter. This solution-based method leverages the selective swelling of block copolymer micelle templates, which can be rationally designed by tuning molecular weight and volume fraction. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Scalable Synthesis of Ag Networks with Optimized Sub-monolayer Au-Pd Nanoparticle Covering for Highly Enhanced SERS Detection and Catalysis

    NASA Astrophysics Data System (ADS)

    Li, Tianyu; Vongehr, Sascha; Tang, Shaochun; Dai, Yuming; Huang, Xiao; Meng, Xiangkang

    2016-11-01

    Highly porous tri-metallic AgxAuyPdz networks with a sub-monolayer bimetallic Au-Pd nanoparticle coating were synthesized via a designed galvanic replacement reaction of Ag nanosponges suspended in mixed solutions of HAuCl4 and K2PdCl4. The resulting networks' ligaments have a rough surface with bimetallic nanoparticles and nanopores due to the removal of Ag. The surface morphology and composition are adjustable via the temperature and the concentrations of the mixed solutions. A very low combined Au and Pd atomic percentage (1−x), where x is the atomic percentage of Ag, leads to sub-monolayer nanoparticle coverings, leaving a large number of active boundaries, nanopores, and metal-metal interfaces accessible. Optimization of the Au/Pd atomic ratio y/z yields high surface-enhanced Raman scattering detection sensitivity (at y/z = 5.06) and a catalytic activity (at y/z = 3.55) toward reduction reactions, benchmarked with 4-nitrophenol, that is higher than for most bimetallic catalysts. Subsequent optimization of x (at fixed y/z) further increases the catalytic activity to obtain a superior tri-metallic catalyst, which is mainly attributed to the synergy of several aspects, including the large porosity, increased surface roughness, accessible interfaces, and the hydrogen absorption capacity of nanosized Pd. This work provides a new concept for the scalable synthesis and performance optimization of tri-metallic nanostructures.

  16. An electrochemical and structural study of highly uniform tin oxide nanowires fabricated by a novel, scalable solvoplasma technique as anode material for sodium ion batteries

    NASA Astrophysics Data System (ADS)

    Mukherjee, Santanu; Schuppert, Nicholas; Bates, Alex; Jasinski, Jacek; Hong, Jong-Eun; Choi, Moon Jong; Park, Sam

    2017-04-01

    A novel solvoplasma-based technique was used to fabricate highly uniform SnO2 nanowires (NWs) for application as an anode in sodium-ion batteries (SIBs). This technique is scalable, rapid, and utilizes a rigorous cleaning process to produce very pure SnO2 NWs with enhanced porosity, which improves sodium-ion hosting and reaction kinetics. The batch of NWs obtained from the plasma process was named the "as-made" sample and, after cleaning, the "pure" sample. Structural characterization showed that the as-made sample has a K+ ion impurity which is absent in the pure samples. The pure samples have a higher maximum specific capacity, 400.71 mAhg-1, and Coulombic efficiency, 85%, compared to the as-made samples, which have a maximum specific capacity of 174.69 mAhg-1 and a Coulombic efficiency of 74% upon cycling. A study of the electrochemical impedance spectra showed that the as-made samples have higher interfacial and diffusion resistance than the pure samples, and that the resistances of both samples increased after 50 cycles of cell operation due to progressive electrode degradation. Specific energy vs. specific power plots were employed to analyze the performance of the system with respect to the working conditions.

  17. Scalable synthesis of hierarchical hollow Li4Ti5O12 microspheres assembled by zigzag-like nanosheets for high rate lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Zhu, Kunxu; Gao, Hanyang; Hu, Guoxin; Liu, Mengjing; Wang, Haochen

    2017-02-01

    Electrochemical performance, abundance, and cost are three crucial criteria to comprehensively evaluate the feasibility of Li4Ti5O12 as an electrode material for lithium-ion batteries (LIBs). Herein, hierarchical hollow Li4Ti5O12 microspheres (HLTOMs) assembled from zigzag-like nanosheets are synthesized by hydrothermal treatment of a scalable lithium peroxotitanate complex solution using low-cost commercial H2TiO3 particles as the titanium source, followed by a calcination treatment. The precursor solution concentration, Li/Ti ratio, and hydrothermal temperature and duration are found to be correlated and should be optimized together to obtain pure Li4Ti5O12 products. A high yield of HLTOMs, up to 120 g L-1, was achieved. Due to the unique morphology, the HLTOMs deliver an outstanding rate capability of 139, 125, and 108 mA h g-1 at 10, 20, and 30 C, respectively, and exhibit 94% capacity retention after 1000 cycles at 30 C, indicating excellent stability. These values are much superior to those of commercial Li4Ti5O12 particles (CLTOPs), showing that HLTOMs are promising anode materials for LIBs.

  18. Scalable Synthesis of Ag Networks with Optimized Sub-monolayer Au-Pd Nanoparticle Covering for Highly Enhanced SERS Detection and Catalysis

    PubMed Central

    Li, Tianyu; Vongehr, Sascha; Tang, Shaochun; Dai, Yuming; Huang, Xiao; Meng, Xiangkang

    2016-01-01

    Highly porous tri-metallic AgxAuyPdz networks with a sub-monolayer bimetallic Au-Pd nanoparticle coating were synthesized via a designed galvanic replacement reaction of Ag nanosponges suspended in mixed solutions of HAuCl4 and K2PdCl4. The resulting networks' ligaments have a rough surface with bimetallic nanoparticles and nanopores due to the removal of Ag. The surface morphology and composition are adjustable via the temperature and the concentrations of the mixed solutions. A very low combined Au and Pd atomic percentage (1−x), where x is the atomic percentage of Ag, leads to sub-monolayer nanoparticle coverings, leaving a large number of active boundaries, nanopores, and metal-metal interfaces accessible. Optimization of the Au/Pd atomic ratio y/z yields high surface-enhanced Raman scattering detection sensitivity (at y/z = 5.06) and a catalytic activity (at y/z = 3.55) toward reduction reactions, benchmarked with 4-nitrophenol, that is higher than for most bimetallic catalysts. Subsequent optimization of x (at fixed y/z) further increases the catalytic activity to obtain a superior tri-metallic catalyst, which is mainly attributed to the synergy of several aspects, including the large porosity, increased surface roughness, accessible interfaces, and the hydrogen absorption capacity of nanosized Pd. This work provides a new concept for the scalable synthesis and performance optimization of tri-metallic nanostructures. PMID:27845400

  19. SWIFT—Scalable Clustering for Automated Identification of Rare Cell Populations in Large, High-Dimensional Flow Cytometry Datasets, Part 1: Algorithm Design

    PubMed Central

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-01-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems. © 2014 The Authors. Published by Wiley Periodicals Inc. PMID:24677621

  20. SWIFT-scalable clustering for automated identification of rare cell populations in large, high-dimensional flow cytometry datasets, part 1: algorithm design.

    PubMed

    Naim, Iftekhar; Datta, Suprakash; Rebhahn, Jonathan; Cavenaugh, James S; Mosmann, Tim R; Sharma, Gaurav

    2014-05-01

    We present a model-based clustering method, SWIFT (Scalable Weighted Iterative Flow-clustering Technique), for digesting high-dimensional large-sized datasets obtained via modern flow cytometry into more compact representations that are well-suited for further automated or manual analysis. Key attributes of the method include the following: (a) the analysis is conducted in the multidimensional space retaining the semantics of the data, (b) an iterative weighted sampling procedure is utilized to maintain modest computational complexity and to retain discrimination of extremely small subpopulations (hundreds of cells from datasets containing tens of millions), and (c) a splitting and merging procedure is incorporated in the algorithm to preserve distinguishability between biologically distinct populations, while still providing a significant compaction relative to the original data. This article presents a detailed algorithmic description of SWIFT, outlining the application-driven motivations for the different design choices, a discussion of computational complexity of the different steps, and results obtained with SWIFT for synthetic data and relatively simple experimental data that allow validation of the desirable attributes. A companion paper (Part 2) highlights the use of SWIFT, in combination with additional computational tools, for more challenging biological problems. © 2014 The Authors. Published by Wiley Periodicals Inc. on behalf of the International Society for Advancement of Cytometry.
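
    The iterative weighted sampling of attribute (b) can be illustrated with a small toy: subsample inversely to local density so that a rare subpopulation (here 0.1% of events) survives into the reduced set. This 1-D sketch conveys only the motivation and is not the SWIFT algorithm itself:

      import numpy as np

      rng = np.random.default_rng(1)
      common = rng.normal(0.0, 1.0, 100_000)  # abundant population
      rare = rng.normal(8.0, 0.3, 100)        # rare population (0.1%)
      data = np.concatenate([common, rare])

      # Estimate local density with a histogram; weight events by 1/density.
      hist, edges = np.histogram(data, bins=200)
      idx = np.clip(np.digitize(data, edges) - 1, 0, hist.size - 1)
      weights = 1.0 / np.maximum(hist[idx], 1)
      weights /= weights.sum()

      sample = rng.choice(data, size=2000, replace=False, p=weights)
      print((sample > 6).sum())  # the rare population is strongly retained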

  1. Sustainable and scalable production of monodisperse and highly uniform colloidal carbonaceous spheres using sodium polyacrylate as the dispersant.

    PubMed

    Gong, Yutong; Xie, Lei; Li, Haoran; Wang, Yong

    2014-10-28

    Monodisperse, uniform colloidal carbonaceous spheres were fabricated by the hydrothermal treatment of glucose with the help of a tiny amount of sodium polyacrylate (PAANa). This synthetic strategy is effective at high glucose concentration and for scale-up experiments. The sphere size can be easily tuned by the reaction time, temperature and glucose concentration.

  2. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
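
    As a hedged illustration of the probability-estimation ingredient: context-tree weighting combines, across context depths, per-node estimates such as the Krichevsky-Trofimov (KT) estimator sketched below. This shows only that basic estimator, not the full CABAC/CTW integration proposed in the paper:

      def kt_probability(zeros, ones, next_bit):
          """Krichevsky-Trofimov estimate of P(next_bit) from past counts."""
          p_one = (ones + 0.5) / (zeros + ones + 1.0)
          return p_one if next_bit == 1 else 1.0 - p_one

      # Sequential probability assignment over a short bit string.
      bits, zeros, ones, p_seq = [1, 1, 0, 1], 0, 0, 1.0
      for b in bits:
          p_seq *= kt_probability(zeros, ones, b)
          zeros, ones = zeros + (b == 0), ones + (b == 1)
      print(p_seq)  # joint probability an arithmetic coder would code at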

  3. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG

    NASA Astrophysics Data System (ADS)

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N.; Froemke, Robert C.; Viventi, Jonathan

    2017-04-01

    Objective. High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals was not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage.

  4. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG.

    PubMed

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N; Froemke, Robert C; Viventi, Jonathan

    2017-04-01

    High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals was not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but also resulted in a small, light headstage.
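
    The current-to-voltage reconstruction can be sketched in the frequency domain as V(f) = Z(f)·I(f). The series R-C electrode impedance and every numeric value below are assumptions for illustration; the paper fits its own electrode impedance model:

      import numpy as np

      fs = 1000.0                                  # sample rate in Hz (assumed)
      t = np.arange(0, 1.0, 1.0 / fs)
      current = 1e-9 * np.sin(2 * np.pi * 10 * t)  # synthetic 10 Hz current (A)

      R, C = 50e3, 10e-9                           # assumed series R-C electrode
      f = np.fft.rfftfreq(t.size, 1.0 / fs)
      Z = np.empty(f.size, dtype=complex)
      Z[0] = R          # DC bin: the test signal is zero-mean, so value is moot
      Z[1:] = R + 1.0 / (1j * 2 * np.pi * f[1:] * C)

      # V(f) = Z(f) * I(f), then back to the time domain.
      voltage = np.fft.irfft(np.fft.rfft(current) * Z, n=t.size)
      print(voltage.std())                         # reconstructed potential (V)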

  5. Controlled Scalable Synthesis of Uniform, High-Quality Monolayer and Few-layer MoS2 Films

    PubMed Central

    Yu, Yifei; Li, Chun; Liu, Yi; Su, Liqin; Zhang, Yong; Cao, Linyou

    2013-01-01

    Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research. PMID:23689610

  6. Facile and Scalable Preparation of Graphene Oxide-Based Magnetic Hybrids for Fast and Highly Efficient Removal of Organic Dyes

    PubMed Central

    Jiao, Tifeng; Liu, Yazhou; Wu, Yitian; Zhang, Qingrui; Yan, Xuehai; Gao, Faming; Bauer, Adam J. P.; Liu, Jianzhao; Zeng, Tingying; Li, Bingbing

    2015-01-01

    This study reports the facile preparation and the dye removal efficiency of nanohybrids composed of graphene oxide (GO) and Fe3O4 nanoparticles with various geometrical structures. In comparison to previously reported GO/Fe3O4 composites prepared through the one-pot, in situ deposition of Fe3O4 nanoparticles, the GO/Fe3O4 nanohybrids reported here were obtained by taking advantage of the physical affinities between sulfonated GO and Fe3O4 nanoparticles, which allows tuning the dimensions and geometries of Fe3O4 nanoparticles in order to decrease their contact area with GO, while still maintaining the magnetic properties of the nanohybrids for easy separation and adsorbent recycling. Both the as-prepared and regenerated nanohybrids demonstrate a nearly 100% removal rate for methylene blue and an impressively high removal rate for Rhodamine B. This study provides new insights into the facile and controllable industrial scale fabrication of safe and highly efficient GO-based adsorbents for dye or other organic pollutants in a wide range of environmental-related applications. PMID:26220847

  7. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    NASA Astrophysics Data System (ADS)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.
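
    As a toy illustration of sampling over data-association hypotheses, the general idea that SSMCMC scales up (this is not the SSMCMC algorithm itself), the following Metropolis walk weighs the two possible target-to-measurement pairings; all values are made up:

      import math, random

      predicted = [0.0, 5.0]   # predicted target positions (made up)
      measured = [4.8, 0.3]    # received measurements (made up)

      def log_likelihood(assignment, sigma=1.0):
          """Gaussian log-likelihood of measurements under an association;
          assignment[i] = index of the measurement paired with target i."""
          return sum(-0.5 * ((measured[j] - predicted[i]) / sigma) ** 2
                     for i, j in enumerate(assignment))

      random.seed(0)
      assignment, counts = [0, 1], {(0, 1): 0, (1, 0): 0}
      for _ in range(10000):
          proposal = assignment[::-1]               # swap the two pairings
          log_a = log_likelihood(proposal) - log_likelihood(assignment)
          if math.log(random.random()) < log_a:     # Metropolis accept test
              assignment = proposal
          counts[tuple(assignment)] += 1
      print(counts)  # nearly all mass lands on the correct pairing (1, 0)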

  8. Highly Stereoselective and Scalable anti-Aldol Reactions using N-(p-dodecylphenylsulfonyl)-2-Pyrrolidinecarboxamide: Scope and Origins of Stereoselectivities

    PubMed Central

    Yang, Hua; Mahapatra, Subham; Cheong, Paul Ha-Yeon; Carter, Rich G.

    2010-01-01

    A highly enantio- and diastereoselective anti-aldol process (up to >99% ee, >99:1 dr) catalyzed by a proline mimetic – N-(p-dodecylphenylsulfonyl)-2-pyrrolidinecarboxamide – has been developed. Catalyst loadings as low as 2 mol% can be employed. The use of industry-friendly solvents for this transformation, as well as neat reaction conditions, has been demonstrated. The scope of this transformation across a range of aldehydes and ketones is explored. Density Functional Theory computations reveal that the enhanced diastereoselectivity originates from non-classical hydrogen bonds between the sulfonamide, the electrophile, and the catalyst enamine that favor the major Anti-Re aldol TS in the Houk-List model. PMID:20932013

  9. Combining Two Methods of Sequence Definition in a Convergent Approach: Scalable Synthesis of Highly Defined and Multifunctionalized Macromolecules.

    PubMed

    Solleder, Susanne C; Martens, Steven; Espeel, Pieter; Du Prez, Filip; Meier, Michael A R

    2017-08-23

    The straightforward convergent synthesis of sequence-defined and multifunctionalized macromolecules is described herein. The first combination of two efficient approaches for the synthesis of sequence-defined macromolecules is reported: thiolactone chemistry and the Passerini three-component reaction (P-3CR). The thiolactone moiety was used as protecting group for the thiol, allowing the synthesis of a library of sequence-defined α,ω-functionalized building blocks. These building blocks were subsequently efficiently coupled to oligomers with carboxylic acid functionalities in a P-3CR. Thus, larger oligomers with molecular weights of up to 4629.73 g mol(-1) were obtained in gram quantities in a convergent approach along with the introduction of independently selectable side chains (up to 15), thus clearly demonstrating the high versatility and the efficiency of the reported approach. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: the region of interest (ROI) of the original image is encoded at high quality, while the rest is encoded coarsely. The method works well under limited memory conditions, with the background region encoded according to the available memory capacity; in this way, the encoded image can easily be stored in limited memory without losing its main information. Simulation results show the scheme is effective.
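
    A minimal sketch of the unequal-quality principle described above, using PyWavelets: wavelet coefficients inside an assumed ROI are quantized finely and the background coarsely. EZW's zerotree bit-plane ordering is omitted, so only the ROI/background quality split is shown:

      import numpy as np
      import pywt

      def encode_region(block, step):
          """Quantize a block's Haar wavelet coefficients with a given step
          (a coarser step means lower quality), then reconstruct."""
          coeffs = pywt.wavedec2(block, "haar", level=2)
          q = [np.round(coeffs[0] / step) * step]
          for detail in coeffs[1:]:
              q.append(tuple(np.round(d / step) * step for d in detail))
          return pywt.waverec2(q, "haar")

      rng = np.random.default_rng(0)
      image = rng.uniform(0, 255, (64, 64))

      recon = encode_region(image, 32.0)                             # coarse background
      recon[16:48, 16:48] = encode_region(image[16:48, 16:48], 4.0)  # fine ROI
      err = np.abs(recon - image)
      print(err[16:48, 16:48].mean(), err.mean())  # ROI error << overall error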

  11. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for software analysis, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions, these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  12. Scalable solvers and applications

    SciTech Connect

    Ribbens, C J

    2000-10-27

    The purpose of this report is to summarize research activities carried out under Lawrence Livermore National Laboratory (LLNL) research subcontract B501073. This contract supported the principal investigator (PI), Dr. Calvin Ribbens, during his sabbatical visit to LLNL from August 1999 through June 2000. Results and conclusions from the work are summarized below in two major sections. The first section covers contributions to the Scalable Linear Solvers and hypre projects in the Center for Applied Scientific Computing (CASC). The second section describes results from collaboration with Patrice Turchi of LLNL's Chemistry and Materials Science Directorate (CMS). A list of publications supported by this subcontract appears at the end of the report.

  13. Scalable optical quantum computer

    SciTech Connect

    Manykin, E A; Mel'nichenko, E V

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  14. Scalable optical quantum computer

    NASA Astrophysics Data System (ADS)

    Manykin, E. A.; Mel'nichenko, E. V.

    2014-12-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications.

  15. Depth-specific optogenetic control in vivo with a scalable, high-density μLED neural probe

    NASA Astrophysics Data System (ADS)

    Scharf, Robert; Tsunematsu, Tomomi; McAlinden, Niall; Dawson, Martin D.; Sakata, Shuzo; Mathieson, Keith

    2016-06-01

    Controlling neural circuits is a powerful approach to uncover a causal link between neural activity and behaviour. Optogenetics has been widely adopted by the neuroscience community as it offers cell-type-specific perturbation with millisecond precision. However, these studies require light delivery in complex patterns with cellular-scale resolution, while covering a large volume of tissue at depth in vivo. Here we describe a novel high-density silicon-based microscale light-emitting diode (μLED) array, consisting of up to ninety-six 25 μm-diameter μLEDs emitting at a wavelength of 450 nm with a peak irradiance of 400 mW/mm². A width of 100 μm, tapering to a 1 μm point, and a 40 μm thickness help minimise tissue damage during insertion. Thermal properties permit a set of optogenetic operating regimes, with ~0.5 °C average temperature increase. We demonstrate depth-dependent activation of mouse neocortical neurons in vivo, offering an inexpensive tool for the precise manipulation of neural activity.
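
    As a back-of-the-envelope check of the quoted figures (our arithmetic, not stated in the abstract): a 25 μm diameter emitter at 400 mW/mm² peak irradiance corresponds to roughly 0.2 mW of optical power per μLED.

    ```python
    import math

    d_mm = 0.025                           # 25 um emitter diameter
    area_mm2 = math.pi * (d_mm / 2) ** 2   # ~4.9e-4 mm^2 emitting area
    power_mw = 400 * area_mm2              # peak irradiance x area
    print(f"~{power_mw * 1000:.0f} uW per uLED")  # ~196 uW
    ```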

  16. Scalable Nanoporous (Pt₁₋ₓNiₓ)₃Al Intermetallic Compounds as Highly Active and Stable Catalysts for Oxygen Electroreduction.

    PubMed

    Han, Gao-Feng; Gu, Lin; Lang, Xing-You; Xiao, Bei-Bei; Yang, Zhen-Zhong; Wen, Zi; Jiang, Qing

    2016-12-07

    Bimetallic platinum-nickel (Pt-Ni) alloys as oxygen reduction reaction (ORR) electrocatalysts show genuine potential to boost widespread use of low-temperature fuel cells in vehicles by virtue of their high catalytic activity. However, their practical implementation encounters primary challenges in structural and catalytic durability caused by the low formation heat of Pt-Ni alloys. Here, we report nanoporous (NP) (Pt₁₋ₓNiₓ)₃Al intermetallic nanoparticles as an oxygen electroreduction catalyst that circumvents this problem by making use of the extraordinarily negative formation heats of Pt-Al and Ni-Al bonds. The NP (Pt₁₋ₓNiₓ)₃Al nanocatalyst, which is mass-produced by alloying/dealloying and mechanical crushing technologies, exhibits a specific activity of 3.6 mA cm⁻² and a mass activity of 2.4 A mg⁻¹ (both normalized to Pt) at 0.90 V as a result of both ligand and compressive strain effects, while strong Ni-Al and Pt-Al bonds ensure exceptional durability by alleviating evolution of the Pt, Ni, and Al components and dissolution of Ni and Al atoms.

  17. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues this trend using parallel computing with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever-increasing scale of these supercomputers comes at the cost of reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme-scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative for handling the worsening problem of shrinking MTBF. In this paper, we address this challenge by designing and implementing an efficient fault-tolerant version of the Coupled Cluster (CC) method in NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.
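
    The core redundancy idea is easy to illustrate in miniature: every data chunk lives on a primary node and on a buddy node, so any single node failure still leaves one consistent copy to recover from. The toy sketch below shows the concept only; the layout and names are invented, and the actual NWChem design (built on Global Arrays) is far more involved.

    ```python
    # Place each chunk on a primary node and a buddy node (toy model).
    chunks = {i: list(range(i * 4, i * 4 + 4)) for i in range(4)}
    nodes = {n: {} for n in range(4)}
    for i, data in chunks.items():
        nodes[i % 4][i] = data                # primary copy
        nodes[(i + 1) % 4][i] = list(data)    # buddy copy on another node

    # Simulate losing one node: every chunk is still recoverable.
    failed = 2
    survivors = {n: s for n, s in nodes.items() if n != failed}
    recovered = {i: d for s in survivors.values() for i, d in s.items()}
    assert sorted(recovered) == sorted(chunks)
    ```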

  18. Depth-specific optogenetic control in vivo with a scalable, high-density μLED neural probe

    PubMed Central

    Scharf, Robert; Tsunematsu, Tomomi; McAlinden, Niall; Dawson, Martin D.; Sakata, Shuzo; Mathieson, Keith

    2016-01-01

    Controlling neural circuits is a powerful approach to uncover a causal link between neural activity and behaviour. Optogenetics has been widely adopted by the neuroscience community as it offers cell-type-specific perturbation with millisecond precision. However, these studies require light delivery in complex patterns with cellular-scale resolution, while covering a large volume of tissue at depth in vivo. Here we describe a novel high-density silicon-based microscale light-emitting diode (μLED) array, consisting of up to ninety-six 25 μm-diameter μLEDs emitting at a wavelength of 450 nm with a peak irradiance of 400 mW/mm². A width of 100 μm, tapering to a 1 μm point, and a 40 μm thickness help minimise tissue damage during insertion. Thermal properties permit a set of optogenetic operating regimes, with ~0.5 °C average temperature increase. We demonstrate depth-dependent activation of mouse neocortical neurons in vivo, offering an inexpensive tool for the precise manipulation of neural activity. PMID:27334849

  19. Facile and scalable preparation of highly wear-resistant superhydrophobic surfaces on wood substrates using silica nanoparticles modified by VTES

    NASA Astrophysics Data System (ADS)

    Jia, Shanshan; Liu, Ming; Wu, Yiqiang; Luo, Sha; Qing, Yan; Chen, Haibo

    2016-11-01

    In this study, an efficient, facile method has been developed for fabricating superhydrophobic surfaces on wood substrates using silica nanoparticles modified by VTES. The as-prepared superhydrophobic wood surface had a water contact angle of 154° and a water slide angle close to 0°. This superhydrophobic wood also showed highly durable and robust wear resistance, surviving prolonged sandpaper abrasion and knife scratching. Even under the extreme condition of boiling water, the superhydrophobicity of the as-prepared wood composite was preserved. Characterization by scanning electron microscopy, energy-dispersive X-ray spectroscopy, and Fourier transform infrared spectroscopy showed that a tough hierarchical micro/nanostructure was created on the wood substrate, and that vinyltriethoxysilane both prevented the agglomeration of the silica nanoparticles and served as the low-surface-free-energy component. This superhydrophobic wood was easy to fabricate, mechanically resistant and exhibited long-term stability. It is therefore considered to be of significant importance for the industrial production of functional wood, especially for outdoor applications.

  20. Scalable integration of Li₅FeO₄ towards robust, high-performance lithium-ion hybrid capacitors.

    PubMed

    Park, Min-Sik; Lim, Young-Geun; Hwang, Soo Min; Kim, Jung Ho; Kim, Jeom-Soo; Dou, Shi Xue; Cho, Jaephil; Kim, Young-Jun

    2014-11-01

    Lithium-ion hybrid capacitors have attracted great interest due to their high specific energy relative to conventional electrical double-layer capacitors. Nevertheless, the safety issue still remains a drawback for lithium-ion capacitors in practical operational environments because of the use of metallic lithium. Herein, single-phase Li₅FeO₄ with an antifluorite structure that acts as an alternative lithium source (instead of metallic lithium) is employed and its potential use for lithium-ion capacitors is verified. Abundant Li⁺ amounts can be extracted from Li₅FeO₄ incorporated in the positive electrode and efficiently doped into the negative electrode during the first electrochemical charging. After the first Li⁺ extraction, Li⁺ does not return to the Li₅FeO₄ host structure and is steadily involved in the electrochemical reactions of the negative electrode during subsequent cycling. Various electrochemical and structural analyses support its superior characteristics for use as a promising lithium source. This versatile approach can yield a sufficient Li⁺-doping efficiency of >90% and improved safety as a result of the removal of metallic lithium from the cell.

  1. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NASA Astrophysics Data System (ADS)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data-center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10⁻⁵ packet loss and 640 ns end-to-end latency at a load of 0.4 with a 16-packet buffer. Numerical investigation of the system with the optical switch port count scaled to 32×32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  2. Crickets Are Not a Free Lunch: Protein Capture from Scalable Organic Side-Streams via High-Density Populations of Acheta domesticus

    PubMed Central

    Lundy, Mark E.; Parrella, Michael P.

    2015-01-01

    It has been suggested that the ecological impact of crickets as a source of dietary protein is less than conventional forms of livestock due to their comparatively efficient feed conversion and ability to consume organic side-streams. This study measured the biomass output and feed conversion ratios of house crickets (Acheta domesticus) reared on diets that varied in quality, ranging from grain-based to highly cellulosic diets. The measurements were made at a much greater population scale and density than any previously reported in the scientific literature. The biomass accumulation was strongly influenced by the quality of the diet (p<0.001), with the nitrogen (N) content, the ratio of N to acid detergent fiber (ADF) content, and the crude fat (CF) content (y=N/ADF+CF) explaining most of the variability between feed treatments (p = 0.02; R² = 0.96). In addition, for populations of crickets that were able to survive to a harvestable size, the feed conversion ratios measured were higher (less efficient) than those reported from studies conducted at smaller scales and lower population densities. Compared to the industrial-scale production of chickens, crickets fed a poultry feed diet showed little improvement in protein conversion efficiency, a key metric in determining the ecological footprint of grain-based livestock protein. Crickets fed the solid filtrate from food waste processed at an industrial scale via enzymatic digestion were able to reach a harvestable size and achieve feed and protein efficiencies similar to that of chickens. However, crickets fed minimally-processed, municipal-scale food waste and diets composed largely of straw experienced >99% mortality without reaching a harvestable size. Therefore, the potential for A. domesticus to sustainably supplement the global protein supply, beyond what is currently produced via grain-fed chickens, will depend on capturing regionally scalable organic side-streams of relatively high-quality that are not
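
    For readers unfamiliar with the metrics, feed conversion ratio (FCR) is feed input divided by biomass gain, and protein conversion efficiency compares protein out to protein in. A worked example with invented numbers, not the study's data:

    ```python
    feed_in_g = 1000.0       # dry feed offered (illustrative)
    biomass_gain_g = 550.0   # harvested cricket biomass (illustrative)
    fcr = feed_in_g / biomass_gain_g           # lower is more efficient
    protein_in_g = feed_in_g * 0.20            # assumed 20%-protein feed
    protein_out_g = biomass_gain_g * 0.20      # assumed 20%-protein biomass
    print(f"FCR = {fcr:.2f}; protein efficiency = {protein_out_g / protein_in_g:.0%}")
    ```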

  3. Precise Perforation and Scalable Production of Si Particles from Low-Grade Sources for High-Performance Lithium Ion Battery Anodes.

    PubMed

    Zong, Linqi; Jin, Yan; Liu, Chang; Zhu, Bin; Hu, Xiaozhen; Lu, Zhenda; Zhu, Jia

    2016-11-09

    Alloy anodes, particularly silicon, have been intensively pursued as one of the most promising anode materials for the next generation of lithium-ion batteries, primarily because of their high specific capacity (>4000 mAh/g) and elemental abundance. In the past decade, various nanostructures with porosity or void-space designs have been demonstrated to be effective in accommodating the large volume expansion (∼300%) and providing a stable solid electrolyte interphase (SEI) during electrochemical cycling. However, producing these building blocks with precise morphology control at large scale and low cost remains a challenge. In addition, most nanostructured silicon suffers from poor Coulombic efficiency due to a large surface area and Li-ion trapping at the surface coating. Here we demonstrate a unique nanoperforation process, combining modified ball milling, annealing, and acid treating, to produce porous Si with precise and continuous porosity control (from 17% to 70%), directly from a low-cost metallurgical silicon source (99% purity, ∼$1/kg). The produced porous Si, coated with graphene by simple ball milling, can deliver a reversible specific capacity of 1250 mAh/g over 1000 cycles at a rate of 1C, with a first-cycle Coulombic efficiency over 89.5%. The porous networks also provide efficient ion and electron pathways and therefore enable excellent rate performance of 880 mAh/g at a rate of 5C. Because particles with precisely controlled porosity can be produced through scalable processes from low-grade materials, nanoperforation is expected to play a role in next-generation lithium-ion battery anodes, as well as in other potential applications such as optoelectronics and thermoelectrics.

  4. Scalable IP switching based on optical interconnect

    NASA Astrophysics Data System (ADS)

    Luo, Zhixiang; Cao, Mingcui; Liu, Erwu

    2000-10-01

    IP traffic on the Internet and enterprise networks has been growing exponentially in the last several years, and much attention is being focused on the use of IP multicast for real-time multimedia applications. Current software-based, general-purpose CPU routers are under great stress because they exhibit high latency and low forwarding speeds. Based on ASICs, Layer 2 switching provides high-speed packet forwarding. Integrating the high speed of Layer 2 switching with the flexibility of Layer 3 routing, Layer 3 switching (IP switching) has been put forward in order to avoid the performance bottleneck associated with Layer 3 forwarding. In this paper, we present a prototype system of scalable IP switching based on a scalable ATM switching fabric and optical interconnect. The IP switching system mainly consists of the input/output interface unit, the scalable ATM switching fabric and the IP control component. Optical interconnects between the input fan-out stage and the interconnect stage, and between the interconnect stage and the output concentration stage, provide high-speed data paths. The interconnect stage is composed of 16 × 16 CMOS-SEED ATM switching modules. With 64 ports of OC-12 interface, the maximum throughput of the prototype system is about 20 million packets per second (MPPS) for a 256-byte average packet length, and the packet loss ratio is less than 10⁻⁹. Benefiting from the scalable architecture and the optical interconnect, this IP switching system can easily scale to very large network sizes.
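
    The quoted throughput is consistent with the port configuration. A back-of-the-envelope check, assuming the standard OC-12 line rate of about 622 Mb/s (not stated in the abstract):

    ```python
    ports = 64
    oc12_bps = 622.08e6                  # standard OC-12 line rate
    aggregate_bps = ports * oc12_bps     # ~39.8 Gb/s total
    pkt_bits = 256 * 8                   # 256-byte average packet
    print(f"{aggregate_bps / pkt_bits / 1e6:.1f} Mpps")  # ~19.4 Mpps, i.e. ~20 MPPS
    ```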

  5. Novel Scalable 3-D MT Inverse Solver

    NASA Astrophysics Data System (ADS)

    Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.

    2016-12-01

    We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As a forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-sources approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT (single-site and/or inter-site) responses, and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of available computational resources for a given problem setup. To parameterize an inverse domain, a mask approach is implemented, which means that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
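
    The inversion loop described, quasi-Newton minimization of a regularized misfit with an adjoint-computed gradient, can be sketched on a toy linear forward operator standing in for extrEMe. Everything below is a stand-in for illustration, not the solver's actual interface.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    G = rng.standard_normal((40, 10))   # toy linear forward operator
    m_true = rng.standard_normal(10)
    d_obs = G @ m_true                  # synthetic "observed" responses
    lam = 1e-3                          # Tikhonov regularization weight

    def misfit(m):
        r = G @ m - d_obs
        return 0.5 * r @ r + 0.5 * lam * m @ m

    def misfit_grad(m):
        # For a linear operator, the adjoint-based gradient is G^T r.
        return G.T @ (G @ m - d_obs) + lam * m

    res = minimize(misfit, np.zeros(10), jac=misfit_grad, method="L-BFGS-B")
    assert np.allclose(res.x, m_true, atol=1e-2)
    ```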

  6. Novel micro-bioreactor high throughput technology for cell culture process development: Reproducibility and scalability assessment of fed-batch CHO cultures.

    PubMed

    Amanullah, Ashraf; Otero, Jose Manuel; Mikola, Mark; Hsu, Amy; Zhang, Jinyou; Aunins, John; Schreyer, H Brett; Hope, James A; Russo, A Peter

    2010-05-01

    With increasing timeline pressures to get therapeutic and vaccine candidates into the clinic, resource-intensive approaches such as the use of shake flasks and bench-top bioreactors may limit the design space for experimentation to yield highly productive processes. The need to conduct large numbers of experiments has resulted in the use of miniaturized high-throughput (HT) technology for process development. One such high-throughput system is the SimCell platform, a robotically driven, cell culture bioreactor system developed by BioProcessors Corp. This study describes the use of the SimCell micro-bioreactor technology for fed-batch cultivation of a GS-CHO transfectant expressing a model IgG4 monoclonal antibody. Cultivations were conducted in gas-permeable chambers based on a micro-fluidic design, with six micro-bioreactors (MBs) per micro-bioreactor array (MBA). Online, non-invasive measurement of total cell density, pH and dissolved oxygen (DO) was performed. One hundred fourteen parallel MBs (19 MBAs) were employed to examine process reproducibility and scalability at shake flask, 3- and 100-L bioreactor scales. The results of the study demonstrate that the SimCell platform operated under fed-batch conditions could support viable cell concentrations of up to at least 12 × 10⁶ cells/mL. In addition, both intra-MB (MB to MB) as well as intra-MBA (MBA to MBA) culture performance was found to be highly reproducible. The intra-MB and -MBA variability was calculated for each measurement as the coefficient of variation, defined as CV (%) = (standard deviation/mean) × 100. The CV values for most intra-MB and intra-MBA measurements were generally under 10%, and the intra-MBA values were slightly lower than those for intra-MB. Cell growth, process parameters, metabolic and protein titer profiles were also compared to those from shake flask, bench-top, and pilot-scale bioreactor cultivations and found to be within ±20% of the historical averages.
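
    The reproducibility metric above is the ordinary coefficient of variation; for illustration (values invented, not from the study):

    ```python
    import numpy as np

    titers = np.array([1.02, 0.98, 1.05, 0.95, 1.01, 0.99])  # g/L, replicate MBs
    cv = titers.std(ddof=1) / titers.mean() * 100   # CV (%) = (std / mean) x 100
    print(f"CV = {cv:.1f}%")   # values under 10% indicate high reproducibility
    ```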

  7. Medusa: A Scalable MR Console Using USB

    PubMed Central

    Stang, Pascal P.; Conolly, Steven M.; Santos, Juan M.; Pauly, John M.; Scott, Greig C.

    2012-01-01

    MRI pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multi-channel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the Universal Serial Bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multi-channel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications. PMID:21954200

  8. Medusa: a scalable MR console using USB.

    PubMed

    Stang, Pascal P; Conolly, Steven M; Santos, Juan M; Pauly, John M; Scott, Greig C

    2012-02-01

    Magnetic resonance imaging (MRI) pulse sequence consoles typically employ closed proprietary hardware, software, and interfaces, making difficult any adaptation for innovative experimental technology. Yet MRI systems research is trending to higher channel count receivers, transmitters, gradient/shims, and unique interfaces for interventional applications. Customized console designs are now feasible for researchers with modern electronic components, but high data rates, synchronization, scalability, and cost present important challenges. Implementing large multichannel MR systems with efficiency and flexibility requires a scalable modular architecture. With Medusa, we propose an open system architecture using the universal serial bus (USB) for scalability, combined with distributed processing and buffering to address the high data rates and strict synchronization required by multichannel MRI. Medusa uses a modular design concept based on digital synthesizer, receiver, and gradient blocks, in conjunction with fast programmable logic for sampling and synchronization. Medusa is a form of synthetic instrument, being reconfigurable for a variety of medical/scientific instrumentation needs. The Medusa distributed architecture, scalability, and data bandwidth limits are presented, and its flexibility is demonstrated in a variety of novel MRI applications.

  9. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2007-12-04

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.
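
    A toy illustration of the arbitration idea, choosing among the productive directions using per-direction status bits; all names are invented and none of the patent's actual encoding is reproduced.

    ```python
    def choose_direction(productive_dims, link_up, buffer_free):
        """Prefer a productive direction whose link is up and whose
        downstream buffer has room; fall back to any live link."""
        live = [d for d in productive_dims if link_up[d]]
        roomy = [d for d in live if buffer_free[d]]
        pool = roomy or live
        return pool[0] if pool else None

    # Packet may move in x+ or y-; x+ has a full downstream buffer.
    print(choose_direction(["x+", "y-"],
                           link_up={"x+": True, "y-": True},
                           buffer_free={"x+": False, "y-": True}))  # -> y-
    ```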

  10. Engineering scalable biological systems

    PubMed Central

    2010-01-01

    Synthetic biology is focused on engineering biological organisms to study natural systems and to provide new solutions for pressing medical, industrial and environmental problems. At the core of engineered organisms are synthetic biological circuits that execute the tasks of sensing inputs, processing logic and performing output functions. In the last decade, significant progress has been made in developing basic designs for a wide range of biological circuits in bacteria, yeast and mammalian systems. However, significant challenges in the construction, probing, modulation and debugging of synthetic biological systems must be addressed in order to achieve scalable higher-complexity biological circuits. Furthermore, concomitant efforts to evaluate the safety and biocontainment of engineered organisms and address public and regulatory concerns will be necessary to ensure that technological advances are translated into real-world solutions. PMID:21468204

  11. Optimized scalable network switch

    DOEpatents

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.

    2010-02-23

    In a massively parallel computing system having a plurality of nodes configured in m multi-dimensions, each node including a computing device, a method for routing packets towards their destination nodes is provided which includes generating at least one of a 2m plurality of compact bit vectors containing information derived from downstream nodes. A multilevel arbitration process in which downstream information stored in the compact vectors, such as link status information and fullness of downstream buffers, is used to determine a preferred direction and virtual channel for packet transmission. Preferred direction ranges are encoded and virtual channels are selected by examining the plurality of compact bit vectors. This dynamic routing method eliminates the necessity of routing tables, thus enhancing scalability of the switch.

  12. Scalable and Cost-Effective Synthesis of Highly Efficient Fe₂N-Based Oxygen Reduction Catalyst Derived from Seaweed Biomass.

    PubMed

    Liu, Long; Yang, Xianfeng; Ma, Na; Liu, Haitao; Xia, Yanzhi; Chen, Chengmeng; Yang, Dongjiang; Yao, Xiangdong

    2016-03-09

    A simple and scalable synthesis of a 3D Fe₂N-based nanoaerogel with superior oxygen reduction reaction activity is reported, starting from waste seaweed biomass and addressing the growing energy scarcity. The merits are due to the synergistic effect of the 3D porous hybrid aerogel support, with excellent electrical conductivity, convenient mass transport and O₂ adsorption, and core/shell-structured Fe₂N/N-doped amorphous carbon nanoparticles.

  13. Customer oriented SNR scalability scheme for scalable video coding

    NASA Astrophysics Data System (ADS)

    Li, Z. G.; Rahardja, S.

    2005-07-01

    Let the whole region be the whole bit rate range that customers are interested in, and a sub-region a specific bit rate range. The weighting factor of each sub-region is determined according to customers' interest. A new type of region of interest (ROI) is defined for SNR scalability, such that the gap between the coding efficiency of the SNR scalability scheme and that of state-of-the-art single-layer coding for a sub-region is a monotonically non-increasing function of the sub-region's weighting factor. This type of ROI is used as a performance index to design a customer-oriented SNR scalability scheme. Our scheme can be used to achieve an optimal customer-oriented scalable tradeoff (COST), and the profit can thus be maximized.
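
    In symbols (our formalization of the sentence above, with Δ introduced for the gap and w for the weighting factor):

    ```latex
    \Delta(r) \;=\; E_{\text{single}}(r) - E_{\text{SNR}}(r),
    \qquad
    w_{r_1} \ge w_{r_2} \;\Longrightarrow\; \Delta(r_1) \le \Delta(r_2)
    ```

    That is, sub-regions that customers weight more heavily lose less coding efficiency relative to single-layer coding.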

  14. Scalable Nonlinear Compact Schemes

    SciTech Connect

    Ghosh, Debojyoti; Constantinescu, Emil M.; Brown, Jed

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
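
    The per-block building block of such a solve is the classic Thomas algorithm; a standard serial version is sketched below. The parallel scheme partitions rows across processors and iterates on a small reduced interface system, which is not shown here.

    ```python
    import numpy as np

    def thomas(a, b, c, d):
        """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal
        (a[0] and c[-1] are unused), d = right-hand side."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Diagonally dominant test system; this dominance is what lets the
    # reduced interface system converge in a handful of iterations.
    n = 8
    x = thomas(np.full(n, -1.0), np.full(n, 4.0), np.full(n, -1.0), np.ones(n))
    ```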

  15. Scalable SCPPM Decoder

    NASA Technical Reports Server (NTRS)

    Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.

    2012-01-01

    A decoder was developed that decodes a serial concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate of the individual rates is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep-space optical communication link capability, and is unique in its scalable design to achieve real-time SCPPM decoding at the aggregate data rate.
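
    Since each unit sustains 150 Mb/s at four iterations, sizing a chain for a target aggregate rate is simple division (rates below are hypothetical):

    ```python
    import math

    unit_mbps = 150                        # per-unit throughput, 4 iterations
    for target_mbps in (300, 622, 1244):   # hypothetical downlink rates
        units = math.ceil(target_mbps / unit_mbps)
        print(f"{target_mbps} Mb/s -> {units} decoder units in series")
    ```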

  16. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    SciTech Connect

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  17. Application of the FETI Method to ASCI Problems: Scalability Results on a Thousand-Processors and Discussion of Highly Heterogeneous Problems

    SciTech Connect

    Bhardwaj, M.; Day, D.; Farhat, C.; Lesoinne, M.; Pierson, K; Rixen, D.

    1999-04-01

    We report on the application of the one-level FETI method to the solution of a class of structural problems associated with the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). We focus on numerical and parallel scalability issues, and discuss the treatment by FETI of severe structural heterogeneities. We also report on preliminary performance results obtained on the ASCI Option Red supercomputer configured with as many as one thousand processors, for problems with as many as 5 million degrees of freedom.

  18. Architecture Knowledge for Evaluating Scalable Databases

    DTIC Science & Technology

    2015-01-16

    Designing massively scalable, highly available big data systems is an immense challenge for software architects. … commercial technologies that can provide the required quality attributes. In big data systems, the data management layer presents unique engineering … QuABaseBD links the taxonomy to general quality attribute scenarios and design tactics for big data systems.

  19. A Scalability Model for ECS's Data Server

    NASA Technical Reports Server (NTRS)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to it. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.

  20. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  1. Scalable coherent interface: Links to the future

    SciTech Connect

    Gustavson, D.B.; Kristiansen, E.

    1991-11-01

    Now that the Scalable Coherent Interface (SCI) has solved the bandwidth problem, what can we use it for? SCI was developed to support closely coupled multiprocessors and their caches in a distributed shared-memory environment, but its scalability and the efficient generality of its architecture make it work very well over a wide range of applications. It can replace a local area network for connecting workstations on a campus. It can be a powerful I/O channel for a supercomputer. It can be the processor-cache-memory-I/O connection in a highly parallel computer. It can gather data from enormous particle detectors and distribute it among thousands of processors. It can connect a desktop microprocessor to memory chips a few millimeters away, disk drives a few meters away, and servers a few kilometers away.

  2. Design and implementation of scalable tape archiver

    NASA Technical Reports Server (NTRS)

    Nemoto, Toshihiro; Kitsuregawa, Masaru; Takagi, Mikio

    1996-01-01

    In order to reduce costs, computer manufacturers try to use commodity parts as much as possible. Mainframes using proprietary processors are being replaced by high-performance RISC microprocessor-based workstations, which are in turn being replaced by the commodity microprocessors used in personal computers. Highly reliable disks for mainframes are also being replaced by disk arrays, which are complexes of disk drives. In this paper we examine the feasibility of a large-scale tertiary storage system composed of 8-mm tape archivers utilizing robotics. In the near future, the 8-mm tape archiver will be widely used and become a commodity part, since the recent rapid growth of multimedia applications requires much larger storage than disk drives can provide. We designed a scalable tape archiver which connects as many 8-mm tape archivers (element archivers) as possible. In the scalable archiver, robotics can exchange a cassette tape between two adjacent element archivers mechanically; thus, we can build a large scalable archiver inexpensively. In addition, a sophisticated migration mechanism distributes frequently accessed tapes (hot tapes) evenly among all of the element archivers, which improves throughput considerably. Even with failures of some tape drives, the system dynamically redistributes hot tapes to the other element archivers which have live tape drives. Several kinds of specially tailored huge archivers are on the market; however, the 8-mm tape scalable archiver could replace them. To maintain high performance in spite of high access locality when a large number of archivers are attached to the scalable archiver, it is necessary to scatter frequently accessed cassettes among the element archivers and to use the tape drives efficiently. For this purpose, we introduce two cassette migration algorithms, foreground migration and background migration. Background migration transfers cassettes between element archivers to redistribute frequently accessed

  3. Memory Scalability and Efficiency Analysis of Parallel Codes

    SciTech Connect

    Janjusic, Tommy; Kartsaklis, Christos

    2015-01-01

    Memory scalability is an enduring problem and bottleneck that plagues many parallel codes. Parallel codes designed for high-performance systems are typically designed over the span of several, and in some instances 10+, years. As a result, optimization practices which were appropriate for earlier systems may no longer be valid and thus require careful reconsideration. Specifically, parallel codes whose memory footprint is a function of their scalability must be carefully considered for future exa-scale systems. In this paper we present a methodology and tool to study the memory scalability of parallel codes. Using our methodology we evaluate an application's memory footprint as a function of scalability, which we have coined memory efficiency, and describe our results. In particular, using our in-house tools we can pinpoint the specific application components which contribute to the application's overall memory footprint (application data structures, libraries, etc.).

  4. DISP: Optimizations towards Scalable MPI Startup

    SciTech Connect

    Fu, Huansong; Pophale, Swaroop S; Gorentla Venkata, Manjunath; Yu, Weikuan

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.
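
    The flavor of delayed initialization is easy to convey: a module allocates its buffers on first use rather than at job launch, so modules a job never touches add nothing to startup time or memory. A toy sketch only; the actual Open MPI Cheetah/Tuned machinery is in C and far more elaborate.

    ```python
    class CollectiveModule:
        def __init__(self, name):
            self.name = name
            self._state = None            # nothing allocated at startup

        def _init_once(self):
            if self._state is None:       # first use pays the setup cost
                self._state = {"buffers": bytearray(1 << 20)}

        def allreduce(self, data):
            self._init_once()
            return sum(data)              # stand-in for the real collective

    mod = CollectiveModule("cheetah")
    print(mod.allreduce([1, 2, 3]))       # initialization happens here, lazily
    ```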

  5. Pursuing Scalability for hypre's Conceptual Interfaces

    SciTech Connect

    Falgout, R D; Jones, J E; Yang, U M

    2004-07-21

    The software library hypre provides high performance preconditioners and solvers for the solution of large, sparse linear systems on massively parallel computers as well as conceptual interfaces that allow users to access the library in the way they naturally think about their problems. These interfaces include a stencil-based structured interface (Struct); a semi-structured interface (semiStruct), which is appropriate for applications that are mostly structured, e.g. block structured grids, composite grids in structured adaptive mesh refinement applications, and overset grids; a finite element interface (FEI) for unstructured problems, as well as a conventional linear-algebraic interface (IJ). It is extremely important to provide an efficient, scalable implementation of these interfaces in order to support the scalable solvers of the library, especially when using tens of thousands of processors. This paper describes the data structures, parallel implementation and resulting performance of the IJ, Struct and semiStruct interfaces. It investigates their scalability, presents successes as well as pitfalls of some of the approaches and suggests ways of dealing with them.

  6. Libra: Scalable Load Balance Analysis

    SciTech Connect

    2009-09-16

    Libra is a tool for scalable analysis of load balance data from all processes in a parallel application. Libra contains an instrumentation module that collects model data from parallel applications and a parallel compression mechanism that uses distributed wavelet transforms to gather load balance model data in a scalable fashion. Data is output to files, and these files can be viewed in a GUI tool by Libra users. The GUI tool associates particular load balance data with regions of code, enabling users to view the load balance properties of distributed "slices" of their application code.

  7. A scalable method for the production of high-titer and high-quality adeno-associated type 9 vectors using the HSV platform

    PubMed Central

    Adamson-Small, Laura; Potter, Mark; Falk, Darin J; Cleaver, Brian; Byrne, Barry J; Clément, Nathalie

    2016-01-01

    Recombinant adeno-associated vectors based on serotype 9 (rAAV9) have demonstrated highly effective gene transfer in multiple animal models of muscular dystrophies and other neurological indications. Current limitations in vector production and purification have hampered widespread implementation of clinical candidate vectors, particularly when systemic administration is considered. In this study, we describe a complete herpes simplex virus (HSV)-based production and purification process capable of generating greater than 1 × 10¹⁴ rAAV9 vector genomes per 10-layer CellSTACK of HEK 293 producer cells, or greater than 1 × 10⁵ vector genomes per cell, in a final, fully purified product. This represents a 5- to 10-fold increase over transfection-based methods. In addition, rAAV vectors produced by this method demonstrated improved biological characteristics when compared to transfection-based production, including increased infectivity as shown by higher transducing unit-to-vector genome ratios and decreased total capsid protein amounts, shown by lower empty-to-full ratios. Together, these data establish a significant improvement in both rAAV9 yields and vector quality. Further, the method can be readily adapted to large-scale good laboratory practice (GLP) and good manufacturing practice (GMP) production of rAAV9 vectors to enable preclinical and clinical studies and provide a platform to build on toward late-phase and commercial production. PMID:27222839
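
    The two quoted yield figures are mutually consistent: dividing them (our arithmetic, not the paper's) implies on the order of a billion producer cells per 10-layer CellSTACK, a plausible figure for HEK 293 cultures.

    ```python
    total_vg = 1e14       # vector genomes per 10-layer CellSTACK
    vg_per_cell = 1e5     # vector genomes per producer cell
    print(f"~{total_vg / vg_per_cell:.0e} cells per CellSTACK")  # ~1e+09
    ```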

  8. Scalable and template-free synthesis of nanostructured Na₁.₀₈V₆O₁₅ as high-performance cathode material for lithium-ion batteries

    SciTech Connect

    Zheng, Shili; Wang, Xinran; Yan, Hong; Du, Hao; Zhang, Yi

    2016-09-15

    Highlights: • Nanostructured Na₁.₀₈V₆O₁₅ was synthesized through an additive-free sol-gel process. • The prepared Na₁.₀₈V₆O₁₅ demonstrated high capacity and sufficient cycling stability. • The reaction temperature was optimized to allow scalable Na₁.₀₈V₆O₁₅ fabrication. - Abstract: Developing high-capacity cathode materials with feasibility and scalability is still challenging for lithium-ion batteries (LIBs). In this study, a high-capacity ternary sodium vanadate compound, nanostructured NaV₆O₁₅, was synthesized template-free through a sol-gel process with high production efficiency. The as-prepared sample was systematically post-treated at different temperatures, and the post-annealing temperature was found to determine the cycling stability and capacity of NaV₆O₁₅. The well-crystallized sample exhibited good electrochemical performance with a high specific capacity of 302 mAh g⁻¹ when cycled at a current density of 0.03 mA g⁻¹. Its relatively long-term cycling stability was characterized by the cell performance under a current density of 1 A g⁻¹, delivering a reversible capacity of 118 mAh g⁻¹ after 300 cycles with 79% capacity retention and nearly 100% coulombic efficiency, all demonstrating the significant promise of the proposed strategy for large-scale synthesis of NaV₆O₁₅ as a high-capacity, high-energy-density cathode for LIBs.

  9. Statistical Scalability Analysis of Communication Operations in Distributed Applications

    SciTech Connect

    Vetter, J S; McCracken, M O

    2001-02-27

    Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
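
    The proposed measure, non-parametric (rank) correlation of task count against an operation's share of total communication time, is a one-liner with SciPy (data below is illustrative):

    ```python
    from scipy.stats import spearmanr

    tasks = [16, 32, 64, 128, 256]
    allreduce_share = [0.10, 0.14, 0.21, 0.30, 0.42]  # fraction of comm time
    rho, p = spearmanr(tasks, allreduce_share)
    print(f"rho = {rho:.2f}, p = {p:.4f}")  # rho near 1 flags an op that scales poorly
    ```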

  10. Scalability study of solid xenon

    SciTech Connect

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces a large-scale optically transparent solid xenon.

  11. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them

  12. Benchmarking and parallel scalability of MANCINTAP, a Parallel High-Performance Tool For Neutron Activation Analysis in Complex 4D Scenarios

    NASA Astrophysics Data System (ADS)

    Firpo, G.; Frambati, S.; Frignani, M.; Gerra, G.

    2014-06-01

    MANCINTAP is a parallel computational tool developed by Ansaldo Nucleare to perform 4D neutron transport, activation and time-resolved dose-rate calculations in very complex geometries for CPU-intensive fission and fusion applications. MANCINTAP creates an automated link between the 3D radiation transport code MCNP5, which is used to evaluate both the neutron fluxes for activation calculations and the resulting secondary gamma dose rates, and the zero-dimensional activation code Anita2000, by handling crucial processes such as data exchange, determination of material mixtures and generation of cumulative probability distributions. A brief description of the computational tool is given here, with particular emphasis on the key technical choices underlying the project. Benchmarking of MANCINTAP has been performed in three steps: (i) against a very simplified model, where an analytical solution is available for comparison; (ii) against the well-established deterministic transport and activation code ATTILA; and (iii) against experimental data obtained at the Frascati Neutron Generator (FNG) facility. An analysis of MANCINTAP scalability performance is presented to demonstrate the robustness of its parallel structure, tailored for HPC applications, which makes it, to the best of our knowledge, a novel tool.

  13. Scalable Synthesis of (−)-Thapsigargin

    PubMed Central

    2016-01-01

    Total syntheses of the complex, highly oxygenated sesquiterpenes thapsigargin (1) and nortrilobolide (2) are presented. Access to analogues of these promising bioactive natural products has been limited to tedious isolation and semisynthetic efforts. Elegant prior total syntheses demonstrated the feasibility of creating these entities in 36–42 step processes. The currently reported route proceeds in a scalable and more concise fashion by utilizing two-phase terpene synthesis logic. Salient features of the work include application of the classic photosantonin rearrangement and precisely choreographed installation of the multiple oxygenations present on the guaianolide skeleton. PMID:28149952

  14. Scalable Optical-Fiber Communication Networks

    NASA Technical Reports Server (NTRS)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  15. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  16. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space-division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high bandwidth
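
    The channel-level parallelism evaluated above can be illustrated with a minimal sketch (not the paper's code; host, ports and the striping scheme are assumptions) that stripes one payload across several TCP connections, one thread per channel:

        import socket
        import threading

        def send_striped(host, ports, payload):
            # Stripe the payload across the channels; each thread plays the
            # role of one replicated protocol processor on its own channel.
            chunks = [payload[i::len(ports)] for i in range(len(ports))]

            def worker(port, chunk):
                with socket.create_connection((host, port)) as s:
                    s.sendall(chunk)

            threads = [threading.Thread(target=worker, args=(p, c))
                       for p, c in zip(ports, chunks)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()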

  17. A Scalable Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chayes, D. N.

    2001-12-01

    The rapidly increasing volume and complexity of MG&G data, and the growing demand from funding agencies and the user community that it be easily accessible, demand that we improve our approach to data management in order to reach a broader user base and operate more efficiently and effectively. We have chosen an approach based on industry-standard relational database management systems (RDBMS) that use community-wide data specifications, where there is a clear and well-documented external interface that allows use of general-purpose as well as customized clients. Rapid prototypes assembled with this approach show significant advantages over the traditional, custom-built data management systems that often use "in-house" legacy file formats, data specifications, and access tools. We have developed an effective database prototype based on a public-domain RDBMS (PostgreSQL) and metadata standard (FGDC), and used it as a template for several ongoing MG&G database management projects - including ADGRAV (Antarctic Digital Gravity Synthesis), MARGINS, the Community Review system of the Digital Library for Earth Science Education, multibeam swath bathymetry metadata, and the R/V Maurice Ewing onboard acquisition system. By using standard formats and specifications, and working from a common prototype, we are able to reuse code and deploy rapidly. Rather than spend time on low-level details such as storage and indexing (which are built into the RDBMS), we can focus on high-level details such as documentation and quality control. In addition, because many commercial off-the-shelf (COTS) and public domain data browsers and visualization tools have built-in RDBMS support, we can focus on backend development and leave the choice of a frontend client(s) up to the end user. While our prototype is running under an open source RDBMS on a single-processor host, the choice of standard components allows this implementation to scale to commercial RDBMS products and multiprocessor servers as

  18. Facile and Scalable Synthesis of Zn3V2O7(OH)2·2H2O Microflowers as a High-Performance Anode for Lithium-Ion Batteries.

    PubMed

    Yan, Haowu; Luo, Yanzhu; Xu, Xu; He, Liang; Tan, Jian; Li, Zhaohuai; Hong, Xufeng; He, Pan; Mai, Liqiang

    2017-08-23

    The employment of nanomaterials and nanotechnologies has been widely acknowledged as an effective strategy to enhance the electrochemical performance of lithium-ion batteries (LIBs). However, how to produce nanomaterials effectively on a large scale remains a challenge. Here, highly crystallized Zn3V2O7(OH)2·2H2O is synthesized through a simple liquid-phase method at room temperature on a large scale, which is easily realized in industry. Through suppressing the reaction dynamics with ethylene glycol, a uniform morphology of microflowers is obtained. Owing to the multiple reaction mechanisms (insertion, conversion, and alloying) during Li insertion/extraction, the prepared electrode delivers a remarkable specific capacity of 1287 mA h g(-1) at 0.2 A g(-1) after 120 cycles. In addition, a high capacity of 298 mA h g(-1) can be obtained at 5 A g(-1) after 1400 cycles. The excellent electrochemical performance can be attributed to the high crystallinity and large specific surface area of the active materials. The smaller particles after cycling could facilitate lithium-ion transport and provide more reaction sites. The facile and scalable synthesis process and excellent electrochemical performance make this material a highly promising anode for commercial LIBs.

  19. Design and Analysis of Scalable Parallel Algorithms

    DTIC Science & Technology

    1993-11-15

    Journal of Parallel Programming, 20(2), 1991. Conference proceedings: Anshul Gupta, Vipin Kumar and Ahmed Sameh. Performance and Scalability of Preconditioned Conjugate Gradient Methods on Parallel Computers. Department of Computer Science, University of Minnesota, Minneapolis, 1993; also available as Technical Report TR 92-64, University of Minnesota.

  20. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  1. Network selection, Information filtering and Scalable computation

    NASA Astrophysics Data System (ADS)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems solved in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods scale to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.
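
    A minimal sketch of the "decomposition and combination" idea (illustrative only; the combination step in the dissertation is more careful than the plain averaging used here) splits a large least-squares problem into row blocks solved independently in parallel:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def solve_block(args):
            A, b = args
            x, *_ = np.linalg.lstsq(A, b, rcond=None)  # small subproblem
            return x

        def decompose_and_combine(A, b, n_blocks=4):
            # Decompose: split the rows into independent subproblems.
            blocks = np.array_split(np.arange(A.shape[0]), n_blocks)
            tasks = [(A[r], b[r]) for r in blocks]
            # Solve in parallel, then combine (here: simple averaging).
            with ProcessPoolExecutor() as ex:
                solutions = list(ex.map(solve_block, tasks))
            return np.mean(solutions, axis=0)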

  2. Efficient scalable solid-state neutron detector

    NASA Astrophysics Data System (ADS)

    Moses, Daniel

    2015-06-01

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a 6Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m2, is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  3. Efficient scalable solid-state neutron detector

    SciTech Connect

    Moses, Daniel

    2015-06-15

    We report on a scalable solid-state neutron detector system that is specifically designed to yield high thermal neutron detection sensitivity. The basic detector unit in this system is made of a {sup 6}Li foil coupled to two crystalline silicon diodes. The theoretical intrinsic efficiency of a detector-unit is 23.8% and that of a detector element comprising a stack of five detector-units is 60%. Based on the measured performance of this detector-unit, the performance of a detector system comprising a planar array of detector elements, scaled to encompass an effective area of 0.43 m{sup 2}, is estimated to yield the minimum absolute efficiency required of radiological portal monitors used in homeland security.

  4. Scalable Sensor Data Processor: Development and Validation

    NASA Astrophysics Data System (ADS)

    Pinto, R.; Berrojo, L.; Garcia, E.; Trautner, R.; Rauwerda, G.; Sunesen, K.; Redant, S.; Thys, G.; Andersson, J.; Hernandez, F.; Habinc, S.; Lopez, J.

    2016-08-01

    Future science and robotic exploration missions are envisaged to be demanding with respect to on-board data processing capabilities, due to the scarcity of downlink bandwidth together with the massive amount of data which can be generated by next-generation instruments, both in terms of data rate and volume. Therefore, new architectures for on-board data processing are needed. The Scalable Sensor Data Processor (SSDP) is a next-generation heterogeneous multicore mixed-signal ASIC for on-board data processing, aiming at providing in a single chip the resources needed to perform data acquisition, control and high-performance processing. This paper presents the project background and the design of the SSDP ASIC. The architectures of the control and processing subsystems are presented and detailed. The current status and future development activities are also presented, together with prototyping and the envisaged testing and validation procedures.

  5. Scalable coding of encrypted images.

    PubMed

    Zhang, Xinpeng; Feng, Guorui; Ren, Yanli; Qian, Zhenxing

    2012-06-01

    This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of the quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while the subimage is decrypted to provide the rough information of the original content, the quantized coefficients can be used to reconstruct the detailed content with an iterative updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
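
    The masking step of the encryption phase is simple to state. A minimal sketch (assuming numpy's generator as a stand-in for the keyed pseudorandom source):

        import numpy as np

        def mask_image(pixels, key):
            # Mask 8-bit pixels by modulo-256 addition with a keyed
            # pseudorandom stream; decryption is (cipher - stream) % 256.
            rng = np.random.default_rng(key)  # stand-in for a keyed PRNG
            stream = rng.integers(0, 256, size=pixels.shape, dtype=np.uint16)
            cipher = (pixels.astype(np.uint16) + stream) % 256
            return cipher.astype(np.uint8)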

  6. Scalable Performance Measurement and Analysis

    SciTech Connect

    Gamblin, Todd

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
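
    The wavelet-based reduction of time-varying load-balance data can be sketched generically (using PyWavelets; illustrative of the data-reduction idea, not Libra's implementation, and the wavelet and keep fraction are arbitrary choices):

        import numpy as np
        import pywt

        def compress_trace(trace, wavelet="db4", keep=0.05):
            # Keep only the largest `keep` fraction of wavelet coefficients;
            # the zeroed remainder is what yields the volume reduction.
            coeffs = pywt.wavedec(trace, wavelet)
            flat = np.concatenate(coeffs)
            thresh = np.quantile(np.abs(flat), 1 - keep)
            return [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

        def decompress_trace(coeffs, wavelet="db4"):
            return pywt.waverec(coeffs, wavelet)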

  7. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC (Superconducting Super Collider) detectors

    SciTech Connect

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C. ); Lockyer, N.; VanBerg, R. )

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, data rates from the detector and online processing power orders of magnitude beyond the capabilities of current high energy physics data acquisition systems are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of gigabytes per second from the detector and into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber-optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported, high-level-language-programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder will also be given in the paper. 3 figs., 1 tab.

  8. Scalable complexity-distortion model for fast motion estimation

    NASA Astrophysics Data System (ADS)

    Yi, Xiaoquan; Ling, Nam

    2005-07-01

    The recently established international video coding standard H.264/AVC and the upcoming standard on scalable video coding (SVC) bring part of the solution to the high-compression-ratio and heterogeneity requirements. However, these algorithms have prohibitive complexity for real-time encoding. There is therefore an important challenge in reducing encoding complexity, preferably in a scalable manner. Motion estimation and motion compensation techniques provide significant coding gain but are the most time-intensive parts of an encoder system. They present tremendous research challenges in designing a flexible, rate-distortion-optimized, yet computationally efficient encoder, especially for various applications. In this paper, we present a scalable motion estimation framework for complexity-distortion consideration. We propose a new progressive initial search (PIS) method to generate an accurate initial search point, followed by a fast search method, which can greatly benefit from the tighter bounds of the PIS. Such an approach offers not only a significant speedup but also optimal distortion performance for a given complexity constraint. We analyze the relationship between computational complexity and distortion (C-D) through a probabilistic distance measure extending complexity-distortion theory. A configurable complexity quantization parameter (Q) is introduced. Simulation results demonstrate that the proposed scalable complexity-distortion framework enables a video encoder to conveniently adjust its complexity while providing the best possible service.

  9. SWIFT-scalable clustering for automated identification of rare cell populations in large, high-dimensional flow cytometry datasets, part 2: biological evaluation.

    PubMed

    Mosmann, Tim R; Naim, Iftekhar; Rebhahn, Jonathan; Datta, Suprakash; Cavenaugh, James S; Weaver, Jason M; Sharma, Gaurav

    2014-05-01

    A multistage clustering and data processing method, SWIFT (detailed in a companion manuscript), has been developed to detect rare subpopulations in large, high-dimensional flow cytometry datasets. An iterative sampling procedure initially fits the data to multidimensional Gaussian distributions, then splitting and merging stages use a criterion of unimodality to optimize the detection of rare subpopulations, to converge on a consistent cluster number, and to describe non-Gaussian distributions. Probabilistic assignment of cells to clusters, visualization, and manipulation of clusters by their cluster medians, facilitate application of expert knowledge using standard flow cytometry programs. The dual problems of rigorously comparing similar complex samples, and enumerating absent or very rare cell subpopulations in negative controls, were solved by assigning cells in multiple samples to a cluster template derived from a single or combined sample. Comparison of antigen-stimulated and control human peripheral blood cell samples demonstrated that SWIFT could identify biologically significant subpopulations, such as rare cytokine-producing influenza-specific T cells. A sensitivity of better than one part per million was attained in very large samples. Results were highly consistent on biological replicates, yet the analysis was sensitive enough to show that multiple samples from the same subject were more similar than samples from different subjects. A companion manuscript (Part 1) details the algorithmic development of SWIFT.
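
    The first stage described above (fitting multidimensional Gaussians with probabilistic cell-to-cluster assignment) corresponds to a Gaussian mixture model. A generic sketch with scikit-learn (not the SWIFT code; the split/merge unimodality stages are not reproduced here):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_stage(events, n_clusters=100, seed=0):
            # events: (n_cells, n_markers) array of flow-cytometry data.
            gmm = GaussianMixture(n_components=n_clusters,
                                  covariance_type="full",
                                  random_state=seed).fit(events)
            # Soft assignments: per-cell cluster membership probabilities.
            return gmm, gmm.predict_proba(events)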

  10. Development of a rapid high-efficiency scalable process for acetylated Sus scrofa cationic trypsin production from Escherichia coli inclusion bodies.

    PubMed

    Zhao, Mingzhi; Wu, Feilin; Xu, Ping

    2015-12-01

    Trypsin is one of the most important enzymatic tools in proteomics and biopharmaceutical studies. Here, we describe its complete recombinant expression and purification starting from a trypsinogen expression vector construct. The Sus scrofa cationic trypsin gene with a propeptide sequence was optimized according to Escherichia coli codon-usage bias and chemically synthesized. The gene was inserted into the pET-11c plasmid to yield an expression vector. Using high-density E. coli fed-batch fermentation, trypsinogen was expressed in inclusion bodies at 1.47 g/L. The inclusion bodies were refolded with a high yield of 36%. The purified trypsinogen was then activated to produce trypsin. To address stability problems, the trypsin thus produced was acetylated. The final product was generated upon gel filtration. The final yield of acetylated trypsin was 182 mg/L from a 5-L fermenter. Our acetylated trypsin product demonstrated higher BAEE activity (30,100 BAEE units/mg) than a commercial product (9,500 BAEE units/mg, Promega). It also demonstrated resistance to autolysis. This is the first report of production of acetylated recombinant trypsin that is stable and suitable for scale-up.

  11. Scalable High-Performance Algorithm for the Simulation of Exciton Dynamics. Application to the Light-Harvesting Complex II in the Presence of Resonant Vibrational Modes.

    PubMed

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-09-09

    The accurate simulation of excitonic energy transfer in molecular complexes with coupled electronic and vibrational degrees of freedom is essential for comparing excitonic system parameters obtained from ab initio methods with measured time-resolved spectra. Several exact methods for computing the exciton dynamics within a density-matrix formalism are known but are restricted to small systems with fewer than 10 sites due to their computational complexity. To study excitonic energy transfer in larger systems, we adapt and extend the exact hierarchical equations of motion (HEOM) method to various high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from the predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly coupled vibrations on the relaxation and show that the transfer does not rely on a fine-tuning of specific modes.

  12. Line length scalable high power diode laser with power densities > 100kw/cm2 for industrial Si-annealing applications

    NASA Astrophysics Data System (ADS)

    Revermann, Markus; Bayer, Andreas; Meinschien, Jens

    2008-02-01

    We present newly developed high-power diode laser modules that deliver outstanding power densities and line uniformity. The combination of recently designed laser diode bars on passive heat sinks with optimized micro-optics results in laser modules with power densities > 100 kW/cm2 over a line of 12 mm x 0.1 mm. The use of non-periodically structured homogenizers leads to a homogeneity of less than 3% p/v, which allows precise heating and annealing applications. Applications for such laser lines include hardening, metallization and annealing of different materials. In the presentation we will show results of a thin-film a-Si annealing process using direct diode laser annealing.

  13. Scalable Video Transcaling for the Wireless Internet

    NASA Astrophysics Data System (ADS)

    Radha, Hayder; van der Schaar, Mihaela; Karande, Shirish

    2004-12-01

    The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes. However, in general, there is an inherent trade-off between the level of scalability and the quality of scalable video streams. In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range. In this paper, we introduce the notion of wireless video transcaling (TS), which is a generalization of (nonscalable) transcoding. With TS, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges. Our proposed TS framework exploits the fact that the level of heterogeneity changes at different points of the video distribution tree over wireless and mobile Internet networks. This provides the opportunity to improve the video quality by performing the appropriate TS process. We argue that an Internet/wireless network gateway represents a good candidate for performing TS. Moreover, we describe hierarchical TS (HTS), which provides a "transcaler" with the option of choosing among different levels of TS processes with different complexities. We illustrate the benefits of TS by considering the recently developed MPEG-4 fine granularity scalability (FGS) video coding. Extensive simulation results of video TS over bit rate ranges supported by emerging wireless LANs are presented.

  14. Scalable synthesis of core-shell structured SiOx/nitrogen-doped carbon composite as a high-performance anode material for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Shi, Lu; Wang, Weikun; Wang, Anbang; Yuan, Keguo; Jin, Zhaoqing; Yang, Yusheng

    2016-06-01

    In this work, a novel core-shell structured SiOx/nitrogen-doped carbon composite has been prepared by simply dispersing the SiOx particles, which are synthesized by a thermal evaporation method from an equimolar mixture of Si and SiO2, into the dopamine solution, followed by a carbonization process. The SiOx core is well covered by the conformal and homogeneous nitrogen-doped carbon layer from the pyrolysis of polydopamine. By contrast with the bare SiOx, the electrochemical performance of the as-prepared core-shell structured SiOx/nitrogen-doped carbon composite has been improved significantly. It delivers a reversible capacity of 1514 mA h g-1 after 100 cycles at a current density of 100 mA g-1 and 933 mA h g-1 at 2 A g-1, much higher than those of commercial graphite anodes. The nitrogen-doped carbon layer ensures the excellent electrochemical performance of the SiOx/C composite. In addition, since dopamine can self-polymerize and coat virtually any surface, this versatile, facile and highly efficient coating process may be widely applicable to obtain various composites with uniform nitrogen-doped carbon coating layer.

  15. Sustainable Engineering and Improved Recycling of PET for High-Value Applications: Transforming Linear PET to Lightly Branched PET with a Novel, Scalable Process

    NASA Astrophysics Data System (ADS)

    Pierre, Cynthia; Torkelson, John

    2009-03-01

    A major challenge for the most effective recycling of poly(ethylene terephthalate) concerns the fact that initial melt processing of PET into a product leads to substantial degradation of molecular weight. Thus, recycled PET has insufficient melt viscosity for reuse in high-value applications such as melt-blowing of PET bottles. Academic and industrial research has tried to remedy this situation by synthesis and use of ``chain extenders'' that can lead to branched PET (with higher melt viscosity than the linear recycled PET) via condensation reactions with functional groups on the PET. Here we show that simple processing of PET via solid-state shear pulverization (SSSP) leads to enhanced PET melt viscosity without need for chemical additives. We hypothesize that this branching results from low levels of chain scission accompanying SSSP, leading to formation of polymeric radicals that participate in chain transfer and combination reactions with other PET chains and thereby to in situ branch formation. The pulverized PET exhibits vastly enhanced crystallization kinetics, eliminating the need to employ cold crystallization to achieve maximum PET crystallinity. Results of SSSP processing of PET will be compared to results obtained with poly(butylene terephthalate).

  16. Highly efficient blue organic light emitting device using indium-free transparent anode Ga:ZnO with scalability for large area coating

    SciTech Connect

    Wang, Liang; Matson, Dean W.; Polikarpov, Evgueni; Swensen, James S.; Bonham, Charles C.; Cosimbescu, Lelia; Berry, J. J.; Ginley, D. S.; Gaspar, Daniel J.; Padmaperuma, Asanga B.

    2010-02-15

    The availability of economically produced and environmentally stable transparent conductive oxide (TCO) coatings is critical for the development of a variety of electronic devices requiring transparent electrodes. Such devices include liquid crystal display pixels and organic light emitting diodes (OLEDs),[1, 2] solar cells,[3, 4] and electrically heated windows.[5, 6] The materials fulfilling these requirements are usually wide-band-gap inorganic transparent conductive oxides. Tin-doped indium oxide, or ITO, has traditionally been used for electronic TCO applications because of its low resistivity, high work function and transparency. Due to the increasing cost and limited supply of indium and its tendency to migrate into the device, there has been increasing research interest in substituting ITO with an indium-free material. A number of alternative metal oxides and doped oxides have been evaluated as TCO materials with varying degrees of success.[7, 8] Among these alternatives to ITO, gallium-doped zinc oxide (GZO) [2, 9] and aluminium-doped zinc oxide (AZO) [10, 11] have drawn particular attention. These materials have been demonstrated to have resistivities and transparencies approaching those of the best ITO, low toxicity, and much lower materials cost. Although AZO is attractive as a TCO electrode material, GZO features a greater resistance to oxidation as a result of gallium's greater electronegativity compared to aluminum.[12, 13]

  17. Bright conjugated polymer nanoparticles containing a biodegradable shell produced at high yields and with tuneable optical properties by a scalable microfluidic device.

    PubMed

    Abelha, T F; Phillips, T W; Bannock, J H; Nightingale, A M; Dreiss, C A; Kemal, E; Urbano, L; deMello, J C; Green, M; Dailey, L A

    2017-02-02

    This study compares the performance of a microfluidic technique and a conventional bulk method to manufacture conjugated polymer nanoparticles (CPNs) embedded within a biodegradable poly(ethylene glycol) methyl ether-block-poly(lactide-co-glycolide) (PEG5K-PLGA55K) matrix. The influence of PEG5K-PLGA55K and the conjugated polymers cyano-substituted poly(p-phenylene vinylene) (CN-PPV) and poly(9,9-dioctylfluorene-2,1,3-benzothiadiazole) (F8BT) on the physicochemical properties of the CPNs was also evaluated. Both techniques enabled CPN production with high end-product yields (∼70-95%). However, while the bulk technique (solvent displacement) under optimal conditions generated small nanoparticles (∼70-100 nm) with similar optical properties (quantum yields ∼35%), the microfluidic approach produced larger CPNs (140-260 nm) with significantly superior quantum yields (49-55%) and tailored emission spectra. CPNs containing CN-PPV showed smaller size distributions and tuneable emission spectra compared to F8BT systems prepared under the same conditions. The presence of PEG5K-PLGA55K did not affect the size or optical properties of the CPNs and provided a neutral net electric charge, as is often required for biomedical applications. The microfluidic flow-based device was successfully used for the continuous preparation of CPNs over a 24-hour period. On the basis of the results presented here, it can be concluded that the microfluidic device used in this study can be used to optimize the production of bright CPNs with tailored properties with good reproducibility.

  18. Fully scalable video coding with packed stream

    NASA Astrophysics Data System (ADS)

    Lopez, Manuel F.; Rodriguez, Sebastian G.; Ortiz, Juan Pablo; Dana, Jose Miguel; Ruiz, Vicente G.; Garcia, Inmaculada

    2005-03-01

    Scalable video coding is a technique which allows a compressed video stream to be decoded in several different ways. This ability allows a user to adaptively recover a specific version of a video depending on its own requirements. Video sequences have temporal, spatial and quality scalabilities. In this work we introduce a novel fully scalable video codec. It is based on a motion-compensated temporal filtering (MCTF) of the video sequences and it uses some of the basic elements of JPEG 2000. This paper describes several specific proposals for video on demand and video-conferencing applications over non-reliable packet-switching data networks.

  19. Scalable architecture in mammalian brains.

    PubMed

    Clark, D A; Mitra, P P; Wang, S S

    2001-05-10

    Comparison of mammalian brain parts has often focused on differences in absolute size, revealing only a general tendency for all parts to grow together. Attempts to find size-independent effects using body weight as a reference variable obscure size relationships owing to independent variation of body size and give phylogenies of questionable significance. Here we use the brain itself as a size reference to define the cerebrotype, a species-by-species measure of brain composition. With this measure, across many mammalian taxa the cerebellum occupies a constant fraction of the total brain volume (0.13 +/- 0.02), arguing against the hypothesis that the cerebellum acts as a computational engine principally serving the neocortex. Mammalian taxa can be well separated by cerebrotype, thus allowing the use of quantitative neuroanatomical data to test evolutionary relationships. Primate cerebrotypes have progressively shifted and neocortical volume fractions have become successively larger in lemurs and lorises, New World monkeys, Old World monkeys, and hominoids, lending support to the idea that primate brain architecture has been driven by directed selection pressure. At the same time, absolute brain size can vary over 100-fold within a taxon, while maintaining a relatively uniform cerebrotype. Brains therefore constitute a scalable architecture.

  20. Scalable encryption using alpha rooting

    NASA Astrophysics Data System (ADS)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription-based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost-effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present the use of a measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
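
    Alpha rooting itself is a transform-domain operation: keep each coefficient's phase and raise its magnitude to a power alpha. A minimal sketch of that core operation (the paper's scalable-encryption scheme, including parameter selection via the Logarithmic AME, is not reproduced here):

        import numpy as np

        def alpha_root(image, alpha):
            # alpha = 1 is the identity; small alpha distorts the image
            # heavily (encryption); applying exponent 1/alpha reverses it.
            F = np.fft.fft2(image.astype(float))
            out = np.abs(F) ** alpha * np.exp(1j * np.angle(F))
            return np.real(np.fft.ifft2(out))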

  1. Embedded High Performance Scalable Computing Systems

    DTIC Science & Technology

    2003-11-01

    From a network viewpoint, a data set comprises S data slots of variable length with a fixed 32-bit word width. A data set is partitioned into slots for... resource. An attach is simply a message passed from the resource to the network interface controller. This is a fixed time of 266 us, independent of the... relying on firmware modifications at most for new resources or functionality.

  2. Generic algorithms for high performance scalable geocomputing

    NASA Astrophysics Data System (ADS)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code; the separation results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile-time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010. A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502. Van Beek, L.P.H., Wada, Y. and Bierkens, M.F.P., 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research, 47. Verstegen, J.A., Karssenberg, D., van der Hilst, F. and Faaij, A.P.C., 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software, 53, pp. 121-136.

  3. Load balancing techniques for scalable web servers

    NASA Astrophysics Data System (ADS)

    Bryhni, Haakon; Klovning, Espen; Kure, Oivind

    1998-10-01

    Scalable web servers can be built using a Network of Workstations (NOW), where server capacity can be added by adding new workstations as the workload increases. The task of load balancing Hyper Text Transfer Protocol traffic to scalable web servers is the topic of this paper. We present a classification framework for scalable web servers, and present simulations of a clustered web server. The cluster communication is modeled using a detailed, verified model of TCP/IP processing over Asynchronous Transfer Mode. The simulator is a trace-driven discrete-event simulator, and the traces are obtained from the proxy server of a large Internet Service Provider in Norway. Various load balancing schemes are simulated. A Round Robin load balancing policy implemented in a modified router gives better average response time and better load balancing than the Rotating Nameserver method used in current scalable web servers.
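
    The Round Robin policy that performed best in these simulations is trivially small; a minimal sketch (server names are placeholders):

        import itertools

        class RoundRobinBalancer:
            # Dispatch each incoming request to the next server in a
            # fixed cyclic order, independent of current server load.
            def __init__(self, servers):
                self._cycle = itertools.cycle(servers)

            def pick(self):
                return next(self._cycle)

        balancer = RoundRobinBalancer(["node1:80", "node2:80", "node3:80"])
        targets = [balancer.pick() for _ in range(6)]
        # node1, node2, node3, node1, node2, node3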

  4. Efficient entropy coding for scalable video coding

    NASA Astrophysics Data System (ADS)

    Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo

    2005-10-01

    The standardization of the scalable extension of H.264 has called for additional functionality based on the H.264 standard to support combined spatio-temporal and SNR scalability. For the entropy coding of the H.264 scalable extension, the Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme has been considered so far. In this paper, we present a new context modeling scheme that uses inter-layer correlation between syntax elements. As a result, it improves the coding efficiency of entropy coding in the H.264 scalable extension. In simulations applying the proposed scheme to encoding the syntax element mb_type, the improvement in coding efficiency is up to 16% in terms of bit saving, due to estimation of a more adequate probability model.
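
    The idea of conditioning an adaptive binary model on a co-located base-layer value can be sketched generically (a counting model rather than CABAC's finite-state tables; all names are illustrative):

        class AdaptiveBinModel:
            # One adaptive probability estimate per context.
            def __init__(self):
                self.counts = {}  # context -> [count of 0s, count of 1s]

            def prob_one(self, ctx):
                n0, n1 = self.counts.get(ctx, [1, 1])  # Laplace-smoothed
                return n1 / (n0 + n1)

            def update(self, ctx, bit):
                self.counts.setdefault(ctx, [1, 1])[bit] += 1

        # Inter-layer context: condition an enhancement-layer bin on the
        # co-located base-layer value, as the abstract proposes for mb_type.
        model = AdaptiveBinModel()
        for base_bit, enh_bit in [(0, 0), (0, 0), (1, 1), (1, 0)]:
            p = model.prob_one(("mb_type", base_bit))
            model.update(("mb_type", base_bit), enh_bit)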

  5. The Co Design Architecture for Exascale Systems, a Novel Approach for Scalable Designs

    SciTech Connect

    Kagan, Michael; Shainer, Gilad; Poole, Stephen W; Shamis, Pavel; Wilde, Todd; Pak, Lui; Liu, Tong; Dubman, Mike; Shahar, Yiftah; Graham, Richard L

    2012-01-01

    High performance computing (HPC) has begun scaling beyond the Petaflop range towards the Exaflop (1000 Petaflops) mark. One of the major concerns throughout the development toward such performance capability is scalability, both at the system level and at the application layer. In this paper we present a novel design concept, the Co-Design approach, which enables tighter joint development of the application communication libraries and the underlying hardware interconnect in order to overcome scalability issues and to enable a more efficient design approach towards Exascale computing. We have suggested a new application programming interface and have demonstrated a 50x improvement in performance and scalability.

  6. A Scalable Segmented Decision Tree Abstract Domain

    NASA Astrophysics Data System (ADS)

    Cousot, Patrick; Cousot, Radhia; Mauborgne, Laurent

    The key to precision and scalability in all formal methods for static program analysis and verification is the handling of disjunctions arising in relational analyses, the flow-sensitive traversal of conditionals and loops, the context-sensitive inter-procedural calls, the interleaving of concurrent threads, etc. Explicit case enumeration immediately leads to combinatorial explosion. The art of scalable static analysis is therefore to abstract disjunctions to minimize cost while preserving weak forms of disjunctions for expressivity.

  7. Performance and Scalability Evaluation of the Ceph Parallel File System

    SciTech Connect

    Wang, Feiyi; Nelson, Mark; Oral, H Sarp; Settlemyer, Bradley W; Atchley, Scott; Caldwell, Blake A; Hill, Jason J

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and provides reliability and fault tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation was performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development and showing great promise.

  8. Equalizer: a scalable parallel rendering framework.

    PubMed

    Eilemann, Stefan; Makhinya, Maxim; Pajarola, Renato

    2009-01-01

    Continuing improvements in CPU and GPU performance, as well as increasing multi-core processor and cluster-based parallelism, demand flexible and scalable parallel rendering solutions that can exploit multipipe hardware-accelerated graphics. In fact, to achieve interactive visualization, scalable rendering systems are essential to cope with the rapid growth of data sets. However, parallel rendering systems are non-trivial to develop, and often only application-specific implementations have been proposed. The task of developing a scalable parallel rendering framework is even more difficult if it should be generic enough to support various types of data and visualization applications and, at the same time, work efficiently on a cluster with distributed graphics cards. In this paper we introduce a novel system called Equalizer, a toolkit for scalable parallel rendering based on OpenGL, which provides an application programming interface (API) to develop scalable graphics applications for a wide range of systems, ranging from large distributed visualization clusters and multi-processor multipipe graphics systems to single-processor single-pipe desktop machines. We describe the system architecture and the basic API, discuss its advantages over previous approaches, and present example configurations, usage scenarios and scalability results.

  9. Efficient Byzantine Fault Tolerance for Scalable Storage and Services

    DTIC Science & Technology

    2009-07-01

    Figure 5.5.6: Throughput (kOps/sec) vs. client processes when f = 1, comparing No Redundancy, Zzyzx-noPQ, Zzyzx, Zzyzx+f+1, Zyzzyva (B=10) and Zyzzyva (B=1). ...need only the minimal number of responsive servers to ensure high throughput, provide single roundtrip latency, and provide scalability through

  10. Scalable Anonymous Group Communication in the Anytrust Model

    DTIC Science & Technology

    2012-04-10

    DC-nets messaging phase was high and not a significant improvement over the shuffle alone. Herbivore [31] makes low latency guarantees (100s of... practical anonymity systems such as Tor [16] or Herbivore [31], where a small number of "wrong" choices—e.g., the choice of entry and exit relay in Tor—can... of-service attacks makes them largely impractical. Herbivore [31] attempts to make DC-nets more scalable, but it provides unconditional anonymity only

  11. Scalable graphene field-effect sensors for specific protein detection.

    PubMed

    Saltzgaber, Grant; Wojcik, Peter; Sharf, Tal; Leyden, Matthew R; Wardini, Jenna L; Heist, Christopher A; Adenuga, Adeniyi A; Remcho, Vincent T; Minot, Ethan D

    2013-09-06

    We demonstrate that micron-scale graphene field-effect transistor biosensors can be fabricated in a scalable fashion from large-area chemical vapor deposition derived graphene. We electrically detect the real-time binding and unbinding of a protein biomarker, thrombin, to and from aptamer-coated graphene surfaces. Our sensors have low background noise and high transconductance, comparable to exfoliated graphene devices. The devices are reusable and have a shelf-life greater than one week.

  12. Scalable Solutions for Interactive Virtual Humans that can Manipulate Objects

    DTIC Science & Technology

    2005-01-01

    A scalable approach is therefore sought for addressing such different requirements in a unified framework. Related Work: Only few animation frameworks... animation of human grasping using forward and inverse kinematics. Computer & Graphics 23:145-154. Baerlocher, P., and Boulic, R. 1998. Task-priority formulations for the kinematic control of highly redundant articulated structures. In Proceedings of IEEE IROS'98, 323-329. Baerlocher, P. 2001

  13. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high-performance block-based wavelet image coder which is designed to be of very low implementation complexity yet rich in features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to the image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process. There is also no intermediate buffering needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image. This gives more flexibility in the implementation. The codec has very good coding performance even when the block size is (16,16).
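
    Bitplane coding operates on the binary expansion of the coefficient magnitudes; a minimal sketch of that representation (the context modeling of LCBiD itself is not reproduced):

        import numpy as np

        def bitplanes(block, n_bits=8):
            # Split non-negative coefficient magnitudes into bitplanes,
            # most significant first; a bitplane coder then encodes each
            # 0/1 plane with context-based models.
            mags = np.abs(block).astype(np.uint32)
            return [(mags >> b) & 1 for b in range(n_bits - 1, -1, -1)]

        planes = bitplanes(np.array([[12, 3], [0, 9]]), n_bits=4)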

  14. Scalable Silicon Nanostructuring for Thermoelectric Applications

    NASA Astrophysics Data System (ADS)

    Koukharenko, E.; Boden, S. A.; Platzek, D.; Bagnall, D. M.; White, N. M.

    2013-07-01

    The current limitations of commercially available thermoelectric (TE) generators include their incompatibility with human-body applications due to the toxicity of commonly used alloys and the possible future shortage of raw materials (Bi-Sb-Te and Se). In this respect, exploiting silicon as an environmentally friendly candidate for thermoelectric applications is a promising alternative, since it is an abundant, ecofriendly semiconductor for which there already exists an infrastructure for low-cost, high-yield processing. Contrary to existing approaches, where n/p-legs were either heavily doped to an optimal carrier concentration of 10^19 cm^-3 or morphologically modified by increasing their roughness, in this work improved thermoelectric performance was achieved in smooth silicon nanostructures with a low doping concentration (1.5 × 10^15 cm^-3). Scalable, highly reproducible e-beam lithographies, compatible with nanoimprint and followed by deep reactive-ion etching (DRIE), were employed to produce arrays of regularly spaced nanopillars of 400 nm height with diameters varying from 140 nm to 300 nm. A potential Seebeck microprobe (PSM) was used to measure the Seebeck coefficients of these nanostructures. This resulted in values ranging from -75 μV/K to -120 μV/K for n-type and 100 μV/K to 140 μV/K for p-type material, significant improvements over previously reported data.

  15. Scalable hybrid computation with spikes.

    PubMed

    Sarpeshkar, Rahul; O'Halloran, Micah

    2002-09-01

    We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. And, third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. And, third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be

  16. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
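
    For scale, the single-node analogue of this computation is a standard Euclidean distance transform; a serial sketch with SciPy (what the paper distributes and accelerates, not its implementation):

        import numpy as np
        from scipy import ndimage

        def distance_field(surface_mask, spacing=(1.0, 1.0, 1.0)):
            # Distance from every voxel to the nearest surface voxel;
            # zeros of the input mark the surface of interest.
            return ndimage.distance_transform_edt(~surface_mask,
                                                  sampling=spacing)

        surface = np.zeros((64, 64, 64), dtype=bool)
        surface[32] = True  # a planar test "surface"
        d = distance_field(surface)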

  17. Scalable, full-colour and controllable chromotropic plasmonic printing

    PubMed Central

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential for building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  18. Scalable Fourier transform system for instantly structured illumination in lithography.

    PubMed

    Ye, Yan; Xu, Fengchuan; Wei, Guojun; Xu, Yishen; Pu, Donglin; Chen, Linsen; Huang, Zhiwei

    2017-05-15

    We report the development of a unique scalable Fourier transform 4-f system for instantly structured illumination in lithography. In the 4-f system, coupled with a 1-D grating and a phase retarder, the ±1st order of diffracted light from the grating serve as coherent incident sources for creating interference patterns on the image plane. By adjusting the grating and the phase retarder, the interference fringes with consecutive frequencies, as well as their orientations and phase shifts, can be generated instantly within a constant interference area. We demonstrate that by adapting this scalable Fourier transform system into lithography, the pixelated nano-fringe arrays with arbitrary frequencies and orientations can be dynamically produced in the photoresist with high variation resolution, suggesting its promising application for large-area functional materials based on space-variant nanostructures in lithography.
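
    The underlying two-beam interference is easy to model numerically. A minimal sketch under idealized plane-wave assumptions (our illustration, not the optical design itself) shows how grating rotation and retarder stepping map to fringe orientation and phase:

```python
import numpy as np

def fringe_pattern(nx, ny, freq, theta, phase):
    """Two-beam interference: I = (1 + cos(2*pi*f*(x cos(t) + y sin(t)) + p)) / 2.
    freq is in cycles per field width; theta sets the fringe orientation."""
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    u = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 * (1 + np.cos(2 * np.pi * freq * u + phase))

# Rotating the grating (theta) and stepping the retarder (phase) re-write
# the fringes without changing the interference area.
I = fringe_pattern(512, 512, freq=40, theta=np.pi / 6, phase=0.0)
print(I.shape, I.min(), I.max())
```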

  19. Scalable Synthesis of Cortistatin A and Related Structures

    PubMed Central

    Shi, Jun; Manolikakes, Georg; Yeh, Chien-Hung; Guerrero, Carlos A.; Shenvi, Ryan A.; Shigehisa, Hiroki

    2011-01-01

    Full details are provided for an improved synthesis of cortistatin A and related structures, as well as the underlying logic and evolution of strategy. The highly functionalized cortistatin A-ring embedded with a key heteroadamantane was synthesized by a simple and scalable 5-step sequence. A chemoselective, tandem geminal dihalogenation of an unactivated methyl group, a reductive fragmentation/trapping/elimination of a bromocyclopropane, and a facile chemoselective etherification reaction afforded the cortistatin A core, dubbed “cortistatinone”. A selective Δ16-alkene reduction with Raney Ni provided cortistatin A. With this scalable and practical route, copious quantities of cortistatinone, Δ16-cortistatin A (the equipotent direct precursor to cortistatin A), and related analogs were prepared for further biological studies. PMID:21539314

  20. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    PubMed

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-08

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objectives and lensed fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. A subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency, with optical saturation count rates of up to 22 kcounts/s from a single silicon vacancy center, while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for a scalable quantum photonics architecture relying on single photon sources and qubits.

  1. Scalable parallel distance field construction for large-scale applications

    SciTech Connect

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.

  2. Compressing Test and Evaluation by Using Flow Data for Scalable Network Traffic Analysis

    DTIC Science & Technology

    2014-10-01

    For example, low quality of service may be caused by many factors, including high traffic volume (and associated congestion) and proximity of sender... (Defense ARJ, October 2014, Vol. 21, No. 4: 788-802).

  3. Scalable Combinatorial Tools for Health Disparities Research

    PubMed Central

    Langston, Michael A.; Levine, Robert S.; Kilbourne, Barbara J.; Rogers, Gary L.; Kershenbaum, Anne D.; Baktash, Suzanne H.; Coughlin, Steven S.; Saxton, Arnold M.; Agboto, Vincent K.; Hood, Darryl B.; Litchveld, Maureen Y.; Oyana, Tonny J.; Matthews-Juarez, Patricia; Juarez, Paul D.

    2014-01-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject. PMID:25310540

  4. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.
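
    As a software analogy for the device's behaviour, a textbook FitzHugh-Nagumo model (not the Mott-memristor circuit) reproduces the all-or-nothing spiking described above:

```python
import numpy as np

# FitzHugh-Nagumo toy model (a software stand-in, not the memristor circuit)
# illustrating the all-or-nothing spiking the neuristor realizes in hardware.
def response(v_kick, t_end=50.0, dt=0.001, a=0.7, b=0.8, tau=12.5):
    v, w = v_kick, -0.624        # rest state is (v, w) ~ (-1.199, -0.624)
    v_max = v
    for _ in range(int(t_end / dt)):
        v += dt * (v - v**3 / 3 - w)
        w += dt * (v + a - b * w) / tau
        v_max = max(v_max, v)
    return v_max

print(response(-0.9))   # sub-threshold kick: decays back, no spike (stays < 0)
print(response(-0.5))   # supra-threshold kick: full spike excursion toward ~2
```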

  5. Scalable combinatorial tools for health disparities research.

    PubMed

    Langston, Michael A; Levine, Robert S; Kilbourne, Barbara J; Rogers, Gary L; Kershenbaum, Anne D; Baktash, Suzanne H; Coughlin, Steven S; Saxton, Arnold M; Agboto, Vincent K; Hood, Darryl B; Litchveld, Maureen Y; Oyana, Tonny J; Matthews-Juarez, Patricia; Juarez, Paul D

    2014-10-10

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual's genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  6. Towards Scalable Optimal Sequence Homology Detection

    SciTech Connect

    Daily, Jeffrey A.; Krishnamoorthy, Sriram; Kalyanaraman, Anantharaman

    2012-12-26

    The field of bioinformatics and computational biology is experiencing a data revolution: experimental techniques to procure data have increased in throughput, improved in accuracy and reduced in costs. This has spurred an array of high profile sequencing and data generation projects. While the data repositories represent untapped reservoirs of rich information critical for scientific breakthroughs, the analytical software tools that are needed to analyze large volumes of such sequence data have significantly lagged behind in their capacity to scale. In this paper, we address homology detection, which is a fundamental problem in large-scale sequence analysis with numerous applications. We present a scalable framework to conduct large-scale optimal homology detection on massively parallel supercomputing platforms. Our approach employs distributed memory work stealing to effectively parallelize optimal pairwise alignment computation tasks. Results on 120,000 cores of the Hopper Cray XE6 supercomputer demonstrate strong scaling and up to 2.42 × 10^7 optimal pairwise sequence alignments computed per second (PSAPS), the highest reported in the literature.
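
    The unit of work being distributed here is a single optimal pairwise alignment. A minimal serial Smith-Waterman scorer (illustrative only; the paper's framework parallelizes millions of such tasks via work stealing) looks like this:

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Optimal local alignment score: the per-pair task that the
    framework distributes across cores with work stealing."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i, j] = max(0, diag, H[i-1, j] + gap, H[i, j-1] + gap)
    return H.max()

print(smith_waterman("ACACACTA", "AGCACACA"))  # -> 12
```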

  7. Parallel heuristics for scalable community detection

    DOE PAGES

    Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth

    2015-08-14

    Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in comparable number or fewer iterations, while providing real speedups of up to 16x using 32 threads.
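
    The serial template is readily available for experimentation. Assuming networkx >= 2.8, which ships a Louvain implementation, a baseline run looks like this:

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

# Serial Louvain baseline (the paper's serial template) on a small graph.
G = nx.karate_club_graph()
parts = louvain_communities(G, seed=42)
print(len(parts), round(modularity(G, parts), 3))
```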

  8. Scalable multichannel MRI data acquisition system.

    PubMed

    Bodurka, Jerzy; Ledden, Patrick J; van Gelderen, Peter; Chu, Renxin; de Zwart, Jacco A; Morris, Doug; Duyn, Jeff H

    2004-01-01

    A scalable multichannel digital MRI receiver system was designed to achieve high bandwidth echo-planar imaging (EPI) acquisitions for applications such as BOLD-fMRI. The modular system design allows for easy extension to an arbitrary number of channels. A 16-channel receiver was developed and integrated with a General Electric (GE) Signa 3T VH/3 clinical scanner. Receiver performance was evaluated on phantoms and human volunteers using a custom-built 16-element receive-only brain surface coil array. At an output bandwidth of 1 MHz, a 100% acquisition duty cycle was achieved. Overall system noise figure and dynamic range were better than 0.85 dB and 84 dB, respectively. During repetitive EPI scanning on phantoms, the relative temporal standard deviation of the image intensity time-course was below 0.2%. As compared to the product birdcage head coil, 16-channel reception with the custom array yielded a nearly 6-fold SNR gain in the cerebral cortex and a 1.8-fold SNR gain in the center of the brain. The excellent system stability combined with the increased sensitivity and SENSE capabilities of 16-channel coils are expected to significantly benefit and enhance fMRI applications. Published 2003 Wiley-Liss, Inc.

  9. Developing a scalable inert gas ion thruster

    NASA Technical Reports Server (NTRS)

    James, E.; Ramsey, W.; Steiner, G.

    1982-01-01

    Analytical studies to identify and then design a high performance scalable ion thruster operating with either argon or xenon for use in large space systems are presented. The magnetoelectrostatic containment concept is selected for its efficient ion generation capabilities. The iterative nature of the bounding magnetic fields allows the designer to scale both the diameter and length, so that the thruster can be adapted to spacecraft growth over time. Three different thruster assemblies (conical, hexagonal and hemispherical) are evaluated for a 12 cm diameter thruster and performance mapping of the various thruster configurations shows that conical discharge chambers produce the most efficient discharge operation, achieving argon efficiencies of 50-80% mass utilization at 240-310 eV/ion and xenon efficiencies of 60-97% at 240-280 eV/ion. Preliminary testing of the large 30 cm thruster, using argon propellant, indicates a 35% improvement over the 12 cm thruster in mass utilization efficiency. Since initial performance is found to be better than projected, a larger 50 cm thruster is already in the development stage.

  10. Scalable cell alignment on optical media substrates.

    PubMed

    Anene-Nzelu, Chukwuemeka G; Choudhury, Deepak; Li, Huipeng; Fraiszudeen, Azmall; Peh, Kah-Yim; Toh, Yi-Chin; Ng, Sum Huan; Leo, Hwa Liang; Yu, Hanry

    2013-07-01

    Cell alignment by underlying topographical cues has been shown to affect important biological processes such as differentiation and functional maturation in vitro. However, the routine use of cell culture substrates with micro- or nano-topographies, such as grooves, is currently hampered by the high cost and specialized facilities required to produce these substrates. Here we present cost-effective, commercially available optical media as substrates for aligning cells in culture. These optical media, including CD-R, DVD-R and optical grating, allow different cell types to attach and grow well on them. The physical dimensions of the grooves in these optical media allowed cells to be aligned in confluent cell culture with maximal cell-cell interaction, and this cell alignment affects the morphology and differentiation of cardiac (H9C2), skeletal muscle (C2C12) and neuronal (PC12) cell lines. The optical media are amenable to various chemical modifications with fibronectin, laminin and gelatin for culturing different cell types. These low-cost, commercially available optical media can serve as scalable substrates for research or drug safety screening applications at industrial scale.

  11. Lightweight and scalable secure communication in VANET

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoling; Lu, Yang; Zhu, Xiaojuan; Qiu, Shuwei

    2015-05-01

    To prevent messages from being tampered with or forged in a vehicular ad hoc network (VANET), the digital signature method is adopted by IEEE 1609.2. However, the costs of this method are excessively high for large-scale networks. This paper addresses the issue with a secure communication framework built on lightweight cryptographic primitives. In our framework, point-to-point and broadcast communications for vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) are studied, based mainly on symmetric cryptography. A new issue thereby incurred is symmetric key management. Thus, we develop key distribution and agreement protocols for two-party keys and group keys under different environments, whether a road side unit (RSU) is deployed or not. The analysis shows that our protocols provide confidentiality, authentication, perfect forward secrecy, forward secrecy and backward secrecy. The proposed group key agreement protocol in particular solves the key leak problem caused by members joining or leaving in existing key agreement protocols. Due to aggregated signatures and the substitution of XOR for point addition, the average computation and communication costs do not significantly increase with the number of vehicles; hence, our framework provides good scalability.
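
    A minimal sketch of the symmetric-cryptography building block (our illustration, not the paper's full framework or its key agreement protocols): broadcast messages are authenticated with an HMAC under a shared group key.

```python
import hmac, hashlib, os

# Shared group key, e.g. distributed via an RSU or a key agreement protocol.
group_key = os.urandom(16)

def send(msg: bytes) -> tuple[bytes, bytes]:
    """Attach an authentication tag to a broadcast message."""
    return msg, hmac.new(group_key, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    """Reject any message that was tampered with or forged."""
    return hmac.compare_digest(tag, hmac.new(group_key, msg, hashlib.sha256).digest())

msg, tag = send(b"position=31.86N,117.28E;speed=62kph")
print(verify(msg, tag))           # -> True
print(verify(b"forged", tag))     # -> False
```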

  12. Wanted: Scalable Tracers for Diffusion Measurements

    PubMed Central

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
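
    As background on the kind of correlation reviewed here, combining the Stokes-Einstein relation with the compact-globule scaling r ∝ M^(1/3) gives the familiar mass dependence (a textbook result, not a formula from this paper):

```latex
D = \frac{k_B T}{6 \pi \eta r}, \qquad r \propto M^{1/3}
\quad\Rightarrow\quad D \propto M^{-1/3}.
```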

  13. Wanted: scalable tracers for diffusion measurements.

    PubMed

    Saxton, Michael J

    2014-11-13

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core-shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say "reinforced Ficoll" or "reinforced hyperbranched polyglycerol."

  14. SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    SciTech Connect

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
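
    SciPy's serial sparse LU factorization is built on SuperLU, so a small-scale feel for the factor/solve interface (the distributed-memory solver parallelizes the same sparse Gaussian elimination) can be had with:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Tiny illustrative system; SuperLU_DIST factors matrices like this with
# millions of unknowns across distributed-memory nodes.
A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 3., 1.],
                         [0., 1., 2.]]))
lu = splu(A)                  # sparse LU factorization (SuperLU)
x = lu.solve(np.array([1., 2., 3.]))
print(np.allclose(A @ x, [1., 2., 3.]))  # -> True
```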

  15. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1998-03-01

    Lilith is a general purpose framework, written in Java, that provides a highly scalable distribution of user code across a heterogeneous computing platform. By creation of suitable user code, the Lilith framework can be used for tool development. The scalable performance provided by Lilith is crucial to the development of effective tools for large distributed systems. Furthermore, since Lilith handles the details of code distribution and communication, the user code can focus primarily on the tool functionality, thus greatly decreasing the time required for tool development. In this paper, the authors concentrate on the use of the Lilith framework to develop scalable tools. The authors review the functionality of Lilith and introduce a typical tool capitalizing on the features of the framework. They present new Objects directly involved with tool creation. They explain details of development and illustrate with an example. They present timing results demonstrating scalability.

  16. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    NASA Astrophysics Data System (ADS)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, including the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: (1) scientific algorithms and SpaceNav tools integrated into a scalable architecture, including maneuver planning, parallel processing, Monte Carlo simulations, optimization algorithms, and software application development/integration into the Google Cloud Platform; and (2) Compute Engine processing, including Application Engine automated processing, performance testing and performance scalability, Cloud MySQL databases and database scalability, cloud data storage, and redundancy and availability.

  17. Scalability improvements to NRLMOL for DFT calculations of large molecules

    NASA Astrophysics Data System (ADS)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the memory and computation time grow with the size of the system; memory requirements scale as N^2, where N is the number of atoms. While the recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder the efficient usage of these platforms. This thesis will present some developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra on sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability will be presented.
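
    A back-of-the-envelope comparison (hypothetical sizes, assuming 8-byte values and 4-byte indices in CSR format) shows why sparse storage relieves the per-node memory bottleneck:

```python
# Dense vs CSR sparse storage for a Hamiltonian-like matrix in which only
# ~1% of basis-function pairs overlap (hypothetical fill fraction).
n = 20_000                     # number of basis functions (illustrative)
nnz = n * (n // 100)           # ~1% nonzero entries
dense_bytes = n * n * 8                      # the N^2 wall per node
csr_bytes = nnz * (8 + 4) + (n + 1) * 4      # values + column idx + row ptr
print(f"dense: {dense_bytes/1e9:.1f} GB, CSR: {csr_bytes/1e9:.2f} GB")
```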

  18. Scalable fault tolerant image communication and storage grid

    NASA Astrophysics Data System (ADS)

    Slik, David; Seiler, Oliver; Altman, Tym; Montour, Mike; Kermani, Mohammad; Proseilo, Walter; Terry, David; Kawahara, Midori; Leckie, Chris; Muir, Dale

    2003-05-01

    Increasing production and use of digital medical imagery are driving new approaches to information storage and management. Traditional, centralized approaches to image communication, storage and archiving are becoming increasingly expensive to scale and operate with high levels of reliability. Multi-site, geographically-distributed deployments connected by limited-bandwidth networks present further scalability, reliability, and availability challenges. A grid storage architecture built from a distributed network of low cost, off-the-shelf servers (nodes) provides scalable data and metadata storage, processing, and communication without single points of failure. Imaging studies are stored, replicated, cached, managed, and retrieved based on defined rules, and nodes within the grid can acquire studies and respond to queries. Grid nodes transparently load-balance queries, storage/retrieval requests, and replicate data for automated backup and disaster recovery. This approach reduces latency, increases availability, provides near-linear scalability and allows the creation of a geographically distributed medical imaging network infrastructure. This paper presents some key concepts in grid storage and discusses the results of a clinical deployment of a multi-site storage grid for cancer care in the province of British Columbia.

  19. The intergroup protocols: Scalable group communication for the internet

    SciTech Connect

    Berket, Karlo

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  20. Event metadata records as a testbed for scalable data mining

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
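
    Because the schema is fixed and simple, HDF5 export really is a few lines. A hedged sketch with hypothetical field names (not the actual ATLAS TAG schema), using h5py:

```python
import numpy as np
import h5py

# Hypothetical TAG-like schema: one fixed-layout metadata row per event.
tags = np.zeros(1000, dtype=[("run", "i4"), ("event", "i8"),
                             ("n_muons", "i2"), ("missing_et", "f4")])
tags["event"] = np.arange(1000)
tags["missing_et"] = np.random.exponential(20.0, 1000).astype("f4")

with h5py.File("tags.h5", "w") as f:
    f.create_dataset("tags", data=tags, compression="gzip")

# A downstream mining tool can then select events without HEP software.
with h5py.File("tags.h5", "r") as f:
    sel = f["tags"][...]
    print((sel["missing_et"] > 50).sum(), "high-MET events")
```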

  1. A scalable micro-mixer for biomedical applications

    NASA Astrophysics Data System (ADS)

    Cortelezzi, Luca; Ferrari, Simone; Dubini, Angelo

    2016-11-01

    Our study presents a geometrically scalable active micro-mixer suitable for biomedical/bioengineering applications and potentially assimilable in a Lab-on-Chip. We designed our micro-mixer with the goal of satisfying the following constraints: small dimensions, because the device must be able to process volumes of fluid in the range of 10^-6 to 10^-9 liters; high mixing speed, because mixing should be obtained in the shortest possible time; constructive simplicity, to facilitate realizability, assimilability and reusability of the micro-mixer; and geometrical scalability, because the micro-mixer should be assimilable to microfluidic systems of different dimensions. We studied numerically the mixing performance of our micro-mixer in both two and three dimensions. We characterize the mixing performance in terms of Reynolds, Strouhal and Péclet numbers in order to establish a practical range of operating conditions for our micro-mixer. Finally, we show that our micro-mixer is geometrically scalable, i.e., micro-mixers of different geometrical dimensions having the same nondimensional specifications produce nearly the same mixing performance.

  2. A Robust Scalable Transportation System Concept

    NASA Technical Reports Server (NTRS)

    Hahn, Andrew; DeLaurentis, Daniel

    2006-01-01

    This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high-level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, instead a System-of-Systems (SoS) approach was adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e. configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of Agent Based Modeling and Network Theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic which means, fundamentally, it asks and answers different questions than deterministic models. The nondeterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolve.

  3. Scalable k-means statistics with Titan.

    SciTech Connect

    Thompson, David C.; Bennett, Janine C.; Pebay, Philippe Pierre

    2009-11-01

    This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
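
    For reference, the serial algorithm that such an engine parallelizes is compact. A plain NumPy version (our sketch, not the VTK/Titan C++ engine) makes the two distributed steps explicit:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain serial k-means: the two steps below (assignment, update) are
    exactly what a parallel engine distributes across processes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, (200, 2)) for c in ([0, 0], [5, 5], [0, 5])])
centers, labels = kmeans(X, 3)
print(np.round(centers, 1))
```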

  4. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. The program is reviewed, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.

  5. Scalability of Localized Arc Filament Plasma Actuators

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2008-01-01

    Temporal flow control of a jet has been widely studied in the past to enhance jet mixing or reduce jet noise. Most of this research, however, has been done using small diameter low Reynolds number jets that often have little resemblance to the much larger jets common in real world applications because the flow actuators available lacked either the power or bandwidth to sufficiently impact these larger higher energy jets. The Localized Arc Filament Plasma Actuators (LAFPA), developed at the Ohio State University (OSU), have demonstrated the ability to impact a small high speed jet in experiments conducted at OSU and the power to perturb a larger high Reynolds number jet in experiments conducted at the NASA Glenn Research Center. However, the response measured in the large-scale experiments was significantly reduced for the same number of actuators compared to the jet response found in the small-scale experiments. A computational study has been initiated to simulate the LAFPA system with additional actuators on a large-scale jet to determine the number of actuators required to achieve the same desired response for a given jet diameter. Central to this computational study is a model for the LAFPA that both accurately represents the physics of the actuator and can be implemented into a computational fluid dynamics solver. One possible model, based on pressure waves created by the rapid localized heating that occurs at the actuator, is investigated using simplified axisymmetric simulations. The results of these simulations will be used to determine the validity of the model before more realistic and time consuming three-dimensional simulations are conducted to ultimately determine the scalability of the LAFPA system.

  6. Parallel Heuristics for Scalable Community Detection

    SciTech Connect

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.

  7. Scalable Production Method for Graphene Oxide Water Vapor Separation Membranes

    SciTech Connect

    Fifield, Leonard S.; Shin, Yongsoon; Liu, Wei; Gotthold, David W.

    2016-01-01

    Membranes for selective water vapor separation were assembled from graphene oxide suspension using techniques compatible with high volume industrial production. The large-diameter graphene oxide flake suspensions were synthesized from graphite materials via relatively efficient chemical oxidation steps with attention paid to maintaining flake size and achieving high graphene oxide concentrations. Graphene oxide membranes produced using scalable casting methods exhibited water vapor flux and water/nitrogen selectivity performance meeting or exceeding that of membranes produced using vacuum-assisted laboratory techniques. (PNNL-SA-117497)

  8. Scalable Domain Decomposed Monte Carlo Particle Transport

    SciTech Connect

    O'Brien, Matthew Joseph

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
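
    The pattern is easy to caricature in a toy serial program (two 1-D domains standing in for MPI ranks; not the production algorithm): each domain advances its own particles and hands off those that cross the boundary.

```python
import random
from collections import deque

# Toy domain-decomposed Monte Carlo transport: domain 0 owns [0, 1),
# domain 1 owns [1, 2); particles crossing a domain boundary are handed off,
# and particles leaving [0, 2) are absorbed.
queues = [deque(random.uniform(0, 1) for _ in range(1000)), deque()]
absorbed = 0
while any(queues):
    for dom, q in enumerate(queues):
        for _ in range(len(q)):                    # advance this domain's batch
            x = q.popleft() + random.gauss(0, 0.1)  # random-walk step
            if x < 0 or x >= 2:
                absorbed += 1                       # left the problem domain
            elif int(x) == dom:
                q.append(x)                         # stays on this "rank"
            else:
                queues[int(x)].append(x)            # hand off to the other "rank"
print(absorbed)   # -> 1000: every particle eventually leaks out of [0, 2)
```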

  9. Scalable microreactors and methods for using same

    DOEpatents

    Lawal, Adeniyi; Qian, Dongying

    2010-03-02

    The present invention provides a scalable microreactor comprising a multilayered reaction block having alternating reaction plates and heat exchanger plates that have a plurality of microchannels; a multilaminated reactor input manifold, a collecting reactor output manifold, a heat exchange input manifold and a heat exchange output manifold. The present invention also provides methods of using the microreactor for multiphase chemical reactions.

  10. Physical principles for scalable neural recording

    PubMed Central

    Zamft, Bradley M.; Maguire, Yael G.; Shapiro, Mikhail G.; Cybulski, Thaddeus R.; Glaser, Joshua I.; Amodei, Dario; Stranges, P. Benjamin; Kalhor, Reza; Dalrymple, David A.; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M.; Carmena, Jose M.; Rabaey, Jan M.; Boyden, Edward S.; Church, George M.; Kording, Konrad P.

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power–bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  11. Scalable Machine Learning for Massive Astronomical Datasets

    NASA Astrophysics Data System (ADS)

    Ball, Nicholas M.; Gray, A.

    2014-04-01

    We present the ability to perform data mining and machine learning operations on a catalog of half a billion astronomical objects. This is the result of the combination of robust, highly accurate machine learning algorithms with linear scalability that renders the applications of these algorithms to massive astronomical data tractable. We demonstrate the core algorithms kernel density estimation, K-means clustering, linear regression, nearest neighbors, random forest and gradient-boosted decision tree, singular value decomposition, support vector machine, and two-point correlation function. Each of these is relevant for astronomical applications such as finding novel astrophysical objects, characterizing artifacts in data, object classification (including for rare objects), object distances, finding the important features describing objects, density estimation of distributions, probabilistic quantities, and exploring the unknown structure of new data. The software, Skytree Server, runs on any UNIX-based machine, a virtual machine, or cloud-based and distributed systems including Hadoop. We have integrated it on the cloud computing system of the Canadian Astronomical Data Centre, the Canadian Advanced Network for Astronomical Research (CANFAR), creating the world's first cloud computing data mining system for astronomy. We demonstrate results showing the scaling of each of our major algorithms on large astronomical datasets, including the full 470,992,970 objects of the 2 Micron All-Sky Survey (2MASS) Point Source Catalog. We demonstrate the ability to find outliers in the full 2MASS dataset utilizing multiple methods, e.g., nearest neighbors. This is likely of particular interest to the radio astronomy community given, for example, that survey projects contain groups dedicated to this topic. 2MASS is used as a proof-of-concept dataset due to its convenience and availability. These results are of interest to any astronomical project with large and/or complex
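
    As a small-scale stand-in for the nearest-neighbour outlier search mentioned above (scikit-learn on synthetic data, not Skytree Server on 2MASS):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# k-nearest-neighbour outlier scoring on synthetic "colours/magnitudes".
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
X[:5] += 8.0                               # inject a few outliers

nn = NearestNeighbors(n_neighbors=10).fit(X)
dist, _ = nn.kneighbors(X)
score = dist[:, -1]                        # distance to the 10th neighbour
print(np.argsort(score)[-5:])              # indices of the strongest outliers
```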

  12. Physical principles for scalable neural recording.

    PubMed

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  13. Responsive, Flexible and Scalable Broader Impacts (Invited)

    NASA Astrophysics Data System (ADS)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    investment of time. Initiated in summer 2010, the webinars are interactive and highly flexible: people can participate from their homes anywhere and can interact according to their comfort levels (i.e., submitting questions in “chat boxes” rather than orally). Expansion - To expand scientists’ research beyond educators attending a workshop or webinar, COSEE-OS uses a blog as an additional mode of communication. Topically focused by concept maps, blogs serve as a forum for scalable content. The varied types of formatting allow scientists to create long-lived resources that remain attributed to them while supporting sustained educator engagement. Blogs are another point of contact and allow educators further asynchronous access to scientists. Based on COSEE-OS evaluations, interacting on a blog was found to be educators’ preferred method of following up with scientists. Sustained engagement of scientists or educators requires a specific return on investment. Workshops and web tools can be used together to maximize scientist impact with a relatively small investment of time. As one educator stated, “It really helps my students’ interest when we discuss concepts and I tell them my knowledge comes directly from a scientist!” [A. deCharon et al. (2009), Online tools help get scientists and educators on the same page, Eos Transactions, American Geophysical Union, 90(34), 289-290.]

  14. pcircle - A Suite of Scalable Parallel File System Tools

    SciTech Connect

    WANG, FEIYI

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.

  16. Scalable C-H Oxidation with Copper: Synthesis of Polyoxypregnanes.

    PubMed

    See, Yi Yang; Herrmann, Aaron T; Aihara, Yoshinori; Baran, Phil S

    2015-11-04

    Steroids bearing C12 oxidations are widespread in nature, yet only one preparative chemical method addresses this challenge, in a low-yielding and not fully understood fashion: Schönecker's Cu-mediated oxidation. This work shines new light on this powerful C-H oxidation method through mechanistic investigation, optimization, and wider application. Culminating in a scalable, rapid, high-yielding, and operationally simple protocol, this procedure is applied to the first synthesis of several parent polyoxypregnane natural products, representing a gateway to over 100 family members.

  17. Simplex-stochastic collocation method with improved scalability

    SciTech Connect

    Edeling, W.N.; Dwight, R.P.; Cinnella, P.

    2016-04-01

    The Simplex-Stochastic Collocation (SSC) method is a robust tool used to propagate uncertain input distributions through a computer code. However, it becomes prohibitively expensive for problems with dimensions higher than 5. The main purpose of this paper is to identify bottlenecks and to improve upon this poor scalability. In order to do so, we propose an alternative interpolation stencil technique based upon the set-covering problem, and we integrate the SSC method into the High-Dimensional Model-Reduction framework. In addition, we address the issue of ill-conditioned sample matrices, and we present an analytical map to facilitate uniformly-distributed simplex sampling.
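
    One standard analytical map for uniformly-distributed simplex sampling (shown as general background; the paper's specific map may differ) normalizes exponential variates:

```python
import numpy as np

def sample_simplex(n, d, seed=0):
    """Draw n points uniformly inside the standard d-simplex via the
    exponential / Dirichlet(1,...,1) construction."""
    g = np.random.default_rng(seed).exponential(size=(n, d + 1))
    # Normalizing gives uniform barycentric coordinates; dropping the last
    # coordinate lands in {x_i >= 0, sum(x) <= 1}.
    return g[:, :-1] / g.sum(axis=1, keepdims=True)

pts = sample_simplex(1000, 3)
print(pts.shape, bool(pts.sum(axis=1).max() <= 1.0))  # (1000, 3) True
```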

  18. Scalable syntheses of the BET bromodomain inhibitor JQ1.

    PubMed

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I

    2015-06-03

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson's reagent, followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step one-pot triazole formation without affecting the yield or enantiomeric purity of (+)-JQ1.

  19. Scalable orbital-angular-momentum sorting without destroying photon states

    NASA Astrophysics Data System (ADS)

    Wang, Fang-Xiang; Chen, Wei; Yin, Zhen-Qiang; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu

    2016-09-01

    Single photons with orbital angular momentum (OAM) have attracted substantial attention from researchers. A single photon can, in theory, carry infinitely many OAM values. Thus, OAM photon states have been widely used in quantum information and fundamental quantum mechanics. Although there are many methods for sorting quantum states with different OAM values, a nondestructive and efficient sorter of high-dimensional OAM remains a fundamental challenge. Here, we propose a scalable OAM sorter that can categorize different OAM states simultaneously while preserving both OAM and spin angular momentum. The fundamental elements of the sorter are symmetric multiport beam splitters (BSs) and Dove prisms in a cascading structure, which in principle can be flexibly and effectively combined to sort arbitrarily high-dimensional OAM photons. The scalable structures proposed here greatly reduce the number of BSs required for sorting high-dimensional OAM states. In view of its nondestructive and extensible features, the sorter can serve as a fundamental device not only for high-dimensional quantum information processing, but also for traditional optics.

  20. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    PubMed Central

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  1. Scalable extensions of HEVC for next generation services

    NASA Astrophysics Data System (ADS)

    Misra, Kiran; Segall, Andrew; Zhao, Jie; Kim, Seung-Hwan

    2013-02-01

    The high efficiency video coding (HEVC) standard being developed by ITU-T VCEG and ISO/IEC MPEG achieves a compression goal of reducing the bitrate by half for the same visual quality when compared with earlier video compression standards such as H.264/AVC. It achieves this goal with the use of several new tools such as quad-tree based partitioning of data, larger block sizes, improved intra prediction, sophisticated prediction of motion information, and an in-loop sample adaptive offset process. This paper describes an approach where the HEVC framework is extended to achieve spatial scalability using a multi-loop approach. The enhancement-layer inter-predictive coding efficiency is improved by including within the decoded picture buffer multiple up-sampled versions of the decoded base layer picture. This approach has the advantage of achieving significant coding gains with a simple extension of base layer tools such as inter-prediction and motion information signaling. Coding efficiency of the enhancement layer is further improved using an adaptive loop filter and internal bit-depth increment. The performance of the proposed scalable video coding approach is compared to simulcast transmission of video data using high efficiency model version 6.1 (HM-6.1). The bitrate savings are measured using the Bjontegaard Delta (BD) rate for spatial scalability factors of 2 and 1.5 when compared with simulcast anchors. It is observed that the proposed approach provides average luma BD-rate gains of 33.7% and 50.5%, respectively.
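
    For reference, a sketch of the Bjontegaard Delta rate computation used for such comparisons, assuming the conventional setup of four rate/PSNR points per codec (function and variable names here are illustrative):

        # Bjontegaard Delta (BD) rate: fit cubic polynomials of log-rate as a
        # function of PSNR for two codecs, integrate the gap over the
        # overlapping PSNR range, and convert back to an average percent
        # rate difference.
        import numpy as np

        def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
            lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
            p_ref = np.polyfit(psnr_ref, lr_ref, 3)
            p_test = np.polyfit(psnr_test, lr_test, 3)
            lo = max(min(psnr_ref), min(psnr_test))
            hi = min(max(psnr_ref), max(psnr_test))
            int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
            int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
            avg_diff = (int_test - int_ref) / (hi - lo)
            return (10 ** avg_diff - 1) * 100      # negative => bitrate savings

        # Example with made-up rate (kbps) / PSNR (dB) points:
        print(bd_rate([1000, 2000, 4000, 8000], [30, 33, 36, 39],
                      [800, 1600, 3200, 6400], [30, 33, 36, 39]))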

  2. Scalable Molecular Dynamics with NAMD

    PubMed Central

    Phillips, James C.; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D.; Kalé, Laxmikant; Schulten, Klaus

    2008-01-01

    NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This paper, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Next, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, e.g., the Tcl scripting language. Finally, the paper provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. PMID:16222654
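
    As a toy illustration of the integration methods mentioned above, here is a velocity Verlet loop in Python; NAMD's production integrator adds multiple time stepping, constraints, PME electrostatics, and parallel decomposition, none of which are shown here.

        # Minimal velocity Verlet integrator, the textbook scheme behind the
        # "integration methods" referenced in the abstract.
        import numpy as np

        def velocity_verlet(x, v, force, m, dt, steps):
            a = force(x) / m
            for _ in range(steps):
                x = x + v * dt + 0.5 * a * dt * dt   # position update
                a_new = force(x) / m                 # force at new position
                v = v + 0.5 * (a + a_new) * dt       # velocity update
                a = a_new
            return x, v

        # Example: one 3D harmonic oscillator, k = m = 1.
        x, v = velocity_verlet(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                               force=lambda x: -x, m=1.0, dt=0.01, steps=1000)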

  3. Scalable molecular dynamics with NAMD.

    PubMed

    Phillips, James C; Braun, Rosemary; Wang, Wei; Gumbart, James; Tajkhorshid, Emad; Villa, Elizabeth; Chipot, Christophe; Skeel, Robert D; Kalé, Laxmikant; Schulten, Klaus

    2005-12-01

    NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and laptop computers. NAMD works with AMBER and CHARMM potential functions, parameters, and file formats. This article, directed to novices as well as experts, first introduces concepts and methods used in the NAMD program, describing the classical molecular dynamics force field, equations of motion, and integration methods along with the efficient electrostatics evaluation algorithms employed and temperature and pressure controls used. Features for steering the simulation across barriers and for calculating both alchemical and conformational free energy differences are presented. The motivations for and a roadmap to the internal design of NAMD, implemented in C++ and based on Charm++ parallel objects, are outlined. The factors affecting the serial and parallel performance of a simulation are discussed. Finally, typical NAMD use is illustrated with representative applications to a small, a medium, and a large biomolecular system, highlighting particular features of NAMD, for example, the Tcl scripting language. The article also provides a list of the key features of NAMD and discusses the benefits of combining NAMD with the molecular graphics/sequence analysis software VMD and the grid computing/collaboratory software BioCoRE. NAMD is distributed free of charge with source code at www.ks.uiuc.edu. (c) 2005 Wiley Periodicals, Inc.

  4. Scalable descriptive and correlative statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof which have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
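
    The parallel pattern behind such engines can be sketched as follows (illustrative Python, not the VTK/Titan C++ API): each process computes a local (count, mean, M2) triple, and partial results are merged with the standard pairwise-update formulas.

        # Pairwise merging of descriptive statistics (Chan et al. update):
        # each worker produces (count, mean, M2) and results are combined.
        from functools import reduce

        def merge(a, b):
            na, ma, m2a = a
            nb, mb, m2b = b
            n = na + nb
            delta = mb - ma
            mean = ma + delta * nb / n
            m2 = m2a + m2b + delta * delta * na * nb / n
            return (n, mean, m2)

        def local_stats(xs):                 # Welford's one-pass algorithm
            n, mean, m2 = 0, 0.0, 0.0
            for x in xs:
                n += 1
                d = x - mean
                mean += d / n
                m2 += d * (x - mean)
            return (n, mean, m2)

        parts = [local_stats(c) for c in ([1., 2., 3.], [4., 5.], [6.])]
        n, mean, m2 = reduce(merge, parts)
        variance = m2 / (n - 1)              # sample variance over all data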

  5. Scalable, Flexible and Active Learning on Distributions

    DTIC Science & Technology

    2016-09-01

    Scalable, Flexible and Active Learning on Distributions. Dougal J. Sutherland, CMU-CS-16-128, September 2016, School of Computer Science, Carnegie Mellon University. Keywords: kernel methods, approximate embeddings, statistical machine learning, nonparametric statistics, two-sample testing, active learning. Abstract: A wide range of machine learning problems, including astronomical

  6. Scalable and Sustainable Electrochemical Allylic C–H Oxidation

    PubMed Central

    Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D.; Baran, Phil S.

    2016-01-01

    New methods and strategies for the direct functionalization of C–H bonds are beginning to reshape the fabric of retrosynthetic analysis, impacting the synthesis of natural products, medicines, and even materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C–H functionalization due to the utility of enones and allylic alcohols as versatile intermediates, along with their prevalence in natural and unnatural materials. Allylic oxidations have been featured in hundreds of syntheses, including some natural product syntheses regarded as “classics”. Despite many attempts to improve the efficiency and practicality of this powerful transformation, the vast majority of conditions still employ highly toxic reagents (based around toxic elements such as chromium, selenium, etc.) or expensive catalysts (palladium, rhodium, etc.). These requirements are highly problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. As such, this oxidation strategy is rarely embraced for large-scale synthetic applications, limiting the adoption of this important retrosynthetic strategy by industrial scientists. In this manuscript, we describe an electrochemical solution to this problem that exhibits broad substrate scope, operational simplicity, and high chemoselectivity. This method employs inexpensive and readily available materials, representing the first example of a scalable allylic C–H oxidation (demonstrated on 100 grams), finally opening the door for the adoption of this C–H oxidation strategy in large-scale industrial settings without significant environmental impact. PMID:27096371

  7. Scalable Domain Decomposed Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    O'Brien, Matthew Joseph

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation. The main algorithms we consider are:
    • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node.
    • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently.
    • Global particle find: if particles are on the wrong processor, globally resolve their locations to the correct processor based on particle coordinate and background domain (see the sketch below).
    • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, deciding when particle streaming communication is completed, and spatial redecomposition.
    These algorithms are some of the most important parallel algorithms required for domain decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms up to 2 million MPI processes on the Sequoia supercomputer.
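
    A hedged sketch of the "global particle find" step under a uniform 1D domain decomposition (mpi4py-based; the dissertation's algorithm handles general constructive solid geometry domains, which is far more involved):

        # Route particles to their owning rank based on coordinate alone,
        # assuming the global domain [0, xmax) is split evenly across ranks.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        xmax = 1.0                                  # global domain extent
        parts = np.random.rand(5)                   # local particle x-coords

        owner = np.minimum((parts / xmax * size).astype(int), size - 1)
        outboxes = [parts[owner == r].tolist() for r in range(size)]
        inboxes = comm.alltoall(outboxes)           # exchange with owners
        mine = [x for box in inboxes for x in box]  # particles I now own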

  8. Star pinch scalable EUV source

    NASA Astrophysics Data System (ADS)

    McGeoch, Malcolm W.; Pike, Charles T.

    2003-06-01

    A new direct discharge source of 13.5 nm radiation addresses the heat load problem by creating the plasma remote from all surfaces. The plasma is initially formed at the intersection of many pulsed xenon beamlets. Further heating is then applied via a high current pulse to induce efficient radiation from Xe10+ ions. The plasma is compact, with a single-pulse FWHM diameter of 0.7 mm and length of 3 mm. It is positionally stable, as illustrated by re-imaging onto a fluorescent screen sensitive to EUV and time-integrating over 250 pulses. In this mode the averaged FWHM is 0.9 mm. The conversion efficiency from stored electrical energy to radiation within 2π sterad and 2% bandwidth at 13.5 nm is currently 0.55%, using xenon. Power is delivered to the plasma by a solid state-switched modulator operated at a stored energy of 25 J, of which 10 J is dissipated in the plasma plus circuit and 15 J is recovered. The EUV output in 2% bandwidth at 13.5 nm is 9 mJ/sterad. In repetition rate scaling of the star pinch EUV source to 1 kHz, there is negligible electrode erosion after 10^6 pulses. This is possible because the cathode for the main heating discharge is distributed into 24-fold parallel hollow cathodes, with a combined operational surface area of approximately 20 cm2. The anode is similarly distributed. The walls facing the plasma are 22 mm distant from it and, when scaled to 6 kHz, will see a heat load of less than 1 kW cm-2. The cathode electrode is then expected to receive a heat load of less than 500 W cm-2. The plasma is expected to clear between pulses and be reproducible at frequencies up to at least 10 kHz, at which rate the usable EUV power available at a second focus, assuming collection in 2 sterad, is predicted to be more than 80 W. The star pinch has properties that favor long life and appears to scale to the 50-100 W powers needed for high throughput lithography.

  9. Laplacian embedded regression for scalable manifold regularization.

    PubMed

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
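
    As a rough illustration of the transformed-kernel idea (not the paper's exact construction; gamma, the neighbor count, and mu below are illustrative placeholders), one can add a graph kernel derived from a k-NN Laplacian to an ordinary RBF kernel:

        # Illustrative "original kernel + graph kernel" construction: an RBF
        # kernel plus a regularized-Laplacian graph kernel built over all
        # (labeled + unlabeled) points.
        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import laplacian

        X = np.random.rand(100, 5)
        K = rbf_kernel(X, gamma=1.0)                  # original kernel
        W = kneighbors_graph(X, n_neighbors=5, mode="connectivity")
        W = 0.5 * (W + W.T)                           # symmetrize k-NN graph
        L = laplacian(W).toarray()
        mu = 0.1                                      # manifold weight
        K_graph = np.linalg.pinv(np.eye(len(X)) + mu * L)
        K_transformed = K + K_graph                   # kernel used downstream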

  10. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    NASA Astrophysics Data System (ADS)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  11. Scalable and sustainable electrochemical allylic C-H oxidation.

    PubMed

    Horn, Evan J; Rosen, Brandon R; Chen, Yong; Tang, Jiaze; Chen, Ke; Eastgate, Martin D; Baran, Phil S

    2016-05-05

    New methods and strategies for the direct functionalization of C-H bonds are beginning to reshape the field of retrosynthetic analysis, affecting the synthesis of natural products, medicines and materials. The oxidation of allylic systems has played a prominent role in this context as possibly the most widely applied C-H functionalization, owing to the utility of enones and allylic alcohols as versatile intermediates, and their prevalence in natural and unnatural materials. Allylic oxidations have featured in hundreds of syntheses, including some natural product syntheses regarded as "classics". Despite many attempts to improve the efficiency and practicality of this transformation, the majority of conditions still use highly toxic reagents (based around toxic elements such as chromium or selenium) or expensive catalysts (such as palladium or rhodium). These requirements are problematic in industrial settings; currently, no scalable and sustainable solution to allylic oxidation exists. This oxidation strategy is therefore rarely used for large-scale synthetic applications, limiting the adoption of this retrosynthetic strategy by industrial scientists. Here we describe an electrochemical C-H oxidation strategy that exhibits broad substrate scope, operational simplicity and high chemoselectivity. It uses inexpensive and readily available materials, and represents a scalable allylic C-H oxidation (demonstrated on 100 grams), enabling the adoption of this C-H oxidation strategy in large-scale industrial settings without substantial environmental impact.

  12. Efficient Buffer Management for Scalable Media-on-Demand

    NASA Astrophysics Data System (ADS)

    Waldvogel, Marcel; Deng, Wei; Janakiraman, Ramaprabhu

    2003-01-01

    Widespread availability of high-speed networks and fast, cheap computation have rendered high-quality Media-on-Demand (MoD) feasible. Research on scalable MoD has resulted in many efficient schemes that involve segmentation and asynchronous broadcast of media data, requiring clients to buffer and reorder out-of-order segments efficiently for serial playout. In such schemes, buffer space requirements run to several hundred megabytes and hence require efficient buffer management techniques involving both primary memory and secondary storage: while disk sizes have increased exponentially, access speeds have not kept pace at all. The conversion of out-of-order arrival to in-order playout suggests the use of external memory priority queues, but their content-agnostic nature prevents them from performing well under MoD loads. In this paper, we propose and evaluate a series of simple heuristic schemes which, in simulation studies and in combination with our scalable MoD scheme, achieve significant improvements in storage performance over existing schemes.

  13. Garuda: a scalable tiled display wall using commodity PCs.

    PubMed

    Nirnimesh; Harish, Pawan; Narayanan, P J

    2007-01-01

    Cluster-based tiled display walls can provide cost-effective and scalable displays with high resolution and a large display area. The software to drive them needs to scale too if arbitrarily large displays are to be built. Chromium is a popular software API used to construct such displays. Chromium transparently renders any OpenGL application to a tiled display by partitioning and sending individual OpenGL primitives to each client per frame. Visualization applications often deal with massive geometric data with millions of primitives. Transmitting them every frame results in huge network requirements that adversely affect the scalability of the system. In this paper, we present Garuda, a client-server-based display wall framework that uses off-the-shelf hardware and a standard network. Garuda is scalable to large tile configurations and massive environments. It can transparently render any application built using the Open Scene Graph (OSG) API to a tiled display without any modification by the user. The Garuda server uses an object-based scene structure represented using a scene graph. The server determines the objects visible to each display tile using a novel adaptive algorithm that culls the scene graph to a hierarchy of frustums. Required parts of the scene graph are transmitted to the clients, which cache them to exploit the interframe redundancy. A multicast-based protocol is used to transmit the geometry to exploit the spatial redundancy present in tiled display systems. A geometry push philosophy from the server helps keep the clients in sync with one another. Neither the server nor a client needs to render the entire scene, making the system suitable for interactive rendering of massive models. Transparent rendering is achieved by intercepting the cull, draw, and swap functions of OSG and replacing them with our own. We demonstrate the performance and scalability of the Garuda system for different configurations of display wall. We also show that the

  14. Facile, one-pot and scalable synthesis of highly emissive aqueous-based Ag,Ni:ZnCdS/ZnS core/shell quantum dots with high chemical and optical stability.

    PubMed

    Sahraei, Reza; Soheyli, Ehsan; Faraji, Zahra; Soleiman-Beigi, Mohammad

    2017-10-11

    We report here a one-pot, mild, and low cost aqueous-based synthetic route for the preparation of colloidally stable and highly luminescent dual-doped Ag,Ni:ZnCdS/ZnS core/shell quantum dots (QDs). The pure dopant emission of the Ni-doped core/shell quantum dots was found to be strongly affected by the presence of the second dopant ion (Ag+). Results showed that the PL emission intensity increases, while its peak position experiences an obvious blue shift, with increasing Ag+ content. Based on the optical observations, we provide a simple scheme for the absorption-recombination processes of the carriers through impurity centers. To obtain optimum conditions with better emission characteristics, we also study the effect of different reaction parameters, such as refluxing temperature, core and shell solution pH, molar ratios of the dopant ions (Ni:(Zn+Cd) and Ag:(Zn+Cd)), and concentration of the core and shell precursors. The most effective parameter, however, is the presence of a ZnS shell of suitable thickness, which eliminates surface trap states and enhances the emission intensity. It can also improve the biocompatibility of the prepared QDs by confining the toxic Cd2+ ions inside the core of the QDs. The suggested route also yielded remarkable optical and chemical stability of the colloidal QDs, which makes them a promising class of nano-scale structures for light emitting applications, especially in biological technologies. The suggested process also has the potential to be scaled up while retaining the emission characteristics and structural quality, which is essential for industrial applications in optoelectronic devices. © 2017 IOP Publishing Ltd.

  15. Vertical nanowire electrode arrays as a scalable platform for intracellular interfacing to neuronal circuits

    PubMed Central

    Robinson, Jacob T.; Jorgolli, Marsela; Shalek, Alex K.; Yoon, Myung-Han; Gertner, Rona S.; Park, Hongkun

    2014-01-01

    Deciphering the neuronal code - the rules by which neuronal circuits store and process information - is a major scientific challenge1,2. Currently, these efforts are impeded by a lack of experimental tools that are sensitive enough to quantify the strength of individual synaptic connections and also scalable enough to simultaneously measure and control a large number of mammalian neurons with single-cell resolution3,4. Here, we report a scalable intracellular electrode platform based on vertical nanowires that affords parallel electrical interfacing to multiple mammalian neurons. Specifically, we show that our vertical nanowire electrode arrays (VNEAs) can intracellularly record and stimulate neuronal activity in dissociated cultures of rat cortical neurons and can also be used to map multiple individual synaptic connections. The scalability of this platform, combined with its compatibility with silicon nanofabrication techniques, provides a clear path toward simultaneous, high-fidelity interfacing with hundreds of individual neurons. PMID:22231664

  16. Scalable tuning of building models to hourly data

    DOE PAGES

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  17. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    NASA Astrophysics Data System (ADS)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  18. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
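
    As a single-node baseline for the quantity being computed, SciPy's Euclidean distance transform gives each voxel's distance to the nearest zero voxel; the paper's contribution is performing this at scale on distributed-memory machines with the parallel distance tree.

        # Serial distance field: distance from every voxel to the nearest
        # zero voxel (the "surface of interest").
        import numpy as np
        from scipy import ndimage

        vol = np.ones((64, 64, 64), dtype=np.uint8)
        vol[32, 32, 32] = 0                       # a one-voxel "surface"
        dist = ndimage.distance_transform_edt(vol)
        print(dist.max())                         # farthest voxel's distance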

  19. Scalable tuning of building models to hourly data

    SciTech Connect

    Garrett, Aaron; New, Joshua Ryan

    2015-03-31

    Energy models of existing buildings are unreliable unless calibrated so they correlate well with actual energy usage. Manual tuning requires a skilled professional, is prohibitively expensive for small projects, imperfect, non-repeatable, non-transferable, and not scalable to the dozens of sensor channels that smart meters, smart appliances, and cheap/ubiquitous sensors are beginning to make available today. A scalable, automated methodology is needed to quickly and intelligently calibrate building energy models to all available data, increase the usefulness of those models, and facilitate speed-and-scale penetration of simulation-based capabilities into the marketplace for actualized energy savings. The "Autotune" project is a novel, model-agnostic methodology which leverages supercomputing, large simulation ensembles, and big data mining with multiple machine learning algorithms to allow automatic calibration of simulations that match measured experimental data in a way that is deployable on commodity hardware. This paper shares several methodologies employed to reduce the combinatorial complexity to a computationally tractable search problem for hundreds of input parameters. Furthermore, accuracy metrics are provided which quantify model error to measured data for either monthly or hourly electrical usage from a highly-instrumented, emulated-occupancy research home.

  20. Scalable and Fault Tolerant Failure Detection and Consensus

    SciTech Connect

    Katti, Amogh; Di Fatta, Giuseppe; Naughton III, Thomas J; Engelmann, Christian

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the surviving processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus.
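
    A toy simulation of the push-gossip dissemination that underlies such algorithms (illustrative only, not the paper's protocols): each informed process contacts one random peer per cycle, and the number of cycles needed to inform everyone grows roughly logarithmically with system size.

        # Push-gossip toy model: count cycles until all n processes are
        # informed; compare against log2(n).
        import math, random

        def gossip_cycles(n, seed=0):
            rng = random.Random(seed)
            informed = {0}                       # process 0 starts informed
            cycles = 0
            while len(informed) < n:
                for p in list(informed):
                    informed.add(rng.randrange(n))   # push to a random peer
                cycles += 1
            return cycles

        for n in (64, 1024, 16384):
            print(n, gossip_cycles(n), math.ceil(math.log2(n)))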

  1. MPE graphics -- Scalable X11 graphics in MPI

    SciTech Connect

    Gropp, W.; Karrels, E.; Lusk, E.

    1994-12-31

    As parallel programs enter the mainstream, they need to provide the same facilities and ease-of-use features expected of uniprocessor programs. For many applications, this means that they need to provide graphical output. This talk discusses a library of routines that provide scalable X Window System graphics. These routines make use of the MPI message-passing standard to provide a safe and reliable system that can be easily used in parallel programs. At the same time they encapsulate commonly-used services to provide a convenient interface to X graphics facilities. The easiest way to provide X11 graphics to a parallel program is to allow each process to draw on the same X11 Window. That is, each process opens a connection to the X11 server and draws directly to it. In one sense, this is as scalable a system as possible, since the single graphics display is an unavoidable point of sequential access. However, in reality, an X server can only accept a relatively small number of connections. In addition, the latency associated with each transmission between a parallel process and the X Window server is relatively high. This talk addresses these issues.

  2. The Node Monitoring Component of a Scalable Systems Software Environment

    SciTech Connect

    Miller, Samuel James

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  3. Performance and scalability aspects of directory-based cache coherence in shared-memory multiprocessors

    SciTech Connect

    Picano, S.; Meyer, D.G.; Brooks, E.D. III; Hoag, J.E.

    1993-05-01

    We present a study that accentuates the performance and scalability aspects of directory-based cache coherence in multiprocessor systems. Using a multiprocessor with a software-based coherence scheme, efficient implementations rely heavily on the programmer's ability to explicitly manage the memory system, which is typically handled by hardware support on other bus-based, shared memory multiprocessors. We describe a scalable, shared memory, cache coherent multiprocessor and present simulation results obtained on three parallel programs. This multiprocessor configuration exhibits high performance at no additional parallel programming cost.

  4. Scalable quantum computing in the presence of large detected-error rates

    SciTech Connect

    Knill, E.

    2005-04-01

    The theoretically tolerable erasure error rate for scalable quantum computing is shown to be well above 0.1, given standard scalability assumptions. This bound is obtained by implementing computations with generic stabilizer code teleportation steps that combine the necessary operations with error correction. An interesting consequence of the technique is that the only errors that affect the maximum tolerable error rate are storage and Bell measurement errors. If storage errors are negligible, then any detected Bell measurement error below 1/2 is permissible. For practical computation with high detected error rates, the implementation overheads need to be improved.

  5. Systematic Optimization of Battery Materials: Key Parameter Optimization for the Scalable Synthesis of Uniform, High-Energy, and High Stability LiNi0.6Mn0.2Co0.2O2 Cathode Material for Lithium-Ion Batteries.

    PubMed

    Ren, Dong; Shen, Yun; Yang, Yao; Shen, Luxi; Levin, Barnaby D A; Yu, Yingchao; Muller, David A; Abruña, Héctor D

    2017-10-06

    Ni-rich LiNixMnyCo1-x-yO2 (x > 0.5) (NMC) materials have attracted a great deal of interest as promising cathode candidates for Li-ion batteries due to their low cost and high energy density. However, several issues, including sensitivity to moisture, difficulty in reproducibly preparing particles with well-controlled morphology, and poor cyclability, have hindered their large scale deployment, especially for electric vehicle (EV) applications. In this work, we have developed a uniform, highly stable, high-energy density, Ni-rich LiNi0.6Mn0.2Co0.2O2 cathode material by systematically optimizing synthesis parameters, including pH, stirring rate, and calcination temperature. The particles exhibit a spherical morphology and uniform size distribution, with a well-defined structure and homogeneous transition-metal distribution, owing to the well-controlled synthesis parameters. The material exhibited superior electrochemical properties when compared to a commercial sample, with an initial discharge capacity of 205 mAh/g at 0.1 C. It also exhibited remarkable rate capability, with discharge capacities of 157 mAh/g and 137 mAh/g at 10 and 20 C, respectively, as well as high tolerance to air and moisture. To demonstrate incorporation into a commercial scale EV cell, a large-scale 4.7 Ah LiNi0.6Mn0.2Co0.2O2 Al-full pouch cell with a high cathode loading of 21.6 mg/cm(2), paired with a graphite anode, was fabricated. It exhibited exceptional cyclability, with a capacity retention of 96% after 500 cycles at room temperature. This material, which was obtained by a fully optimized scalable synthesis, delivered combined performance metrics that are among the best for NMC materials reported to date.

  6. Tip-Based Nanofabrication for Scalable Manufacturing

    DOE PAGES

    Hu, Huan; Kim, Hoe; Somnath, Suhas

    2017-03-16

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  7. Scalable Unix tools on parallel processors

    SciTech Connect

    Gropp, W.; Lusk, E.

    1994-12-31

    The introduction of parallel processors that run a separate copy of Unix on each processor has introduced new problems in managing the user's environment. This paper discusses some generalizations of common Unix commands for managing files (e.g., ls) and processes (e.g., ps) that are convenient and scalable. These basic tools, just like their Unix counterparts, are text-based. We also discuss a way to use these with a graphical user interface (GUI). Some notes on the implementation are provided. Prototypes of these commands are publicly available.
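
    A minimal sketch of the idea behind a parallel ps-style tool (mpi4py is used here purely as a stand-in; the paper's tools predate it and use their own startup mechanisms): every rank reports local process information and rank 0 prints a merged view.

        # "Scalable ps" sketch: gather one line of local process info per
        # rank and print a merged, sorted report on rank 0.
        # Run: mpiexec -n 8 python pps.py
        import os, socket
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        line = f"{socket.gethostname():20s} pid={os.getpid()}"
        report = comm.gather(line, root=0)
        if comm.Get_rank() == 0:
            print("\n".join(sorted(report)))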

  8. Overcoming Scalability Challenges for Tool Daemon Launching

    SciTech Connect

    Ahn, D H; Arnold, D C; de Supinski, B R; Lee, G L; Miller, B P; Schulz, M

    2008-02-15

    Many tools that target parallel and distributed environments must co-locate a set of daemons with the distributed processes of the target application. However, efficient and portable deployment of these daemons on large scale systems is an unsolved problem. We overcome this gap with LaunchMON, a scalable, robust, portable, secure, and general purpose infrastructure for launching tool daemons. Its API allows tool builders to identify all processes of a target job, launch daemons on the relevant nodes, and control daemon interaction. Our results show that LaunchMON scales to very large daemon counts and substantially enhances performance over existing ad hoc mechanisms.

  9. First experience with the scalable coherent interface

    SciTech Connect

    Mueller, H. . ECP Division); RD24 Collaboration

    1994-02-01

    The research project RD24 is studying applications of the Scalable Coherent Interface (IEEE-1596) standard for the Large Hadron Collider (LHC). First SCI node chips from Dolphin were used to demonstrate the use and functioning of SCI's packet protocols and to measure data rates. The authors present results from a first, two-node SCI ringlet at CERN, based on an R3000 RISC processor node and a DMA node on an MC68040 processor bus. A diagnostic link analyzer monitors the SCI packet protocols up to full link bandwidth. In its second phase, RD24 will build a first implementation of a multi-ringlet SCI data merger.

  10. Scalable networks for discrete quantum random walks

    SciTech Connect

    Fujiwara, S.; Osaki, H.; Buluta, I.M.; Hasegawa, S.

    2005-09-15

    Recently, quantum random walks (QRWs) have been thoroughly studied in order to develop new quantum algorithms. In this paper we propose scalable quantum networks for discrete QRWs on circles, lines, and also in higher dimensions. In our method the information about the position of the walker is stored in a quantum register and the network consists of only one-qubit rotation and (controlled)^n-NOT gates, therefore it is purely computational and independent of the physical implementation. As an example, we describe the experimental realization in an ion-trap system.
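
    A toy numpy simulation of the kind of walk such networks implement, here a Hadamard-coined discrete walk on a circle of N sites (the proposed networks realize the conditional shift with (controlled)^n-NOT gates acting on a position register):

        # Coined quantum walk on a circle: coin flip on every site, then a
        # coin-conditioned shift left or right (with wraparound).
        import numpy as np

        N, steps = 16, 20
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin
        psi = np.zeros((N, 2), dtype=complex)
        psi[0, 0] = 1.0                                # walker starts at site 0

        for _ in range(steps):
            psi = psi @ H.T                            # apply coin at each site
            psi = np.stack([np.roll(psi[:, 0], 1),     # coin 0 steps right
                            np.roll(psi[:, 1], -1)],   # coin 1 steps left
                           axis=1)

        prob = (abs(psi) ** 2).sum(axis=1)             # position distribution
        print(np.round(prob, 3), prob.sum())           # probabilities sum to 1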

  11. Towards a Scalable, Biomimetic, Antibacterial Coating

    NASA Astrophysics Data System (ADS)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent them. Thus, I have developed a surface-topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates, however none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces; 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implementation of antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  12. Agroinfiltration as an Effective and Scalable Strategy of Gene Delivery for Production of Pharmaceutical Proteins.

    PubMed

    Chen, Qiang; Lai, Huafang; Hurtado, Jonathan; Stahnke, Jake; Leuzinger, Kahlin; Dent, Matthew

    2013-06-01

    Current human biologics are most commonly produced by mammalian cell culture-based fermentation technologies. However, its limited scalability and high cost prevent this platform from meeting the ever increasing global demand. Plants offer a novel alternative system for the production of pharmaceutical proteins that is more scalable, cost-effective, and safer than current expression paradigms. The recent development of deconstructed virus-based vectors has allowed rapid and high-level transient expression of recombinant proteins, and in turn, provided a preferred plant based production platform. One of the remaining challenges for the commercial application of this platform was the lack of a scalable technology to deliver the transgene into plant cells. Therefore, this review focuses on the development of an effective and scalable technology for gene delivery in plants. Direct and indirect gene delivery strategies for plant cells are first presented, and the two major gene delivery technologies based on agroinfiltration are subsequently discussed. Furthermore, the advantages of syringe and vacuum infiltration as gene delivery methodologies are extensively discussed, in context of their applications and scalability for commercial production of human pharmaceutical proteins in plants. The important steps and critical parameters for the successful implementation of these strategies are also detailed in the review. Overall, agroinfiltration based on syringe and vacuum infiltration provides an efficient, robust and scalable gene-delivery technology for the transient expression of recombinant proteins in plants. The development of this technology will greatly facilitate the realization of plant transient expression systems as a premier platform for commercial production of pharmaceutical proteins.

  13. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores

    PubMed Central

    2014-01-01

    Background: There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods requiring long execution times and large amounts of memory on a single workstation limit their use on these massive data. Results: This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence on merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs with high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset, completing in only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. Conclusions: In this paper, we presented highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler PMID:25253533

  14. Young Investigator Program: Modular Paradigm for Scalable Quantum Information

    DTIC Science & Technology

    2016-03-04

    Modular Paradigm for Scalable Quantum Information. Paola Cappellaro, Massachusetts Institute of Technology. Final Report, AFRL-AFOSR-VA-TR-2016-0120, contract FA9550-12-1-0292. The goal of the Young Investigator Program project "Modular Paradigm for Scalable Quantum Information" was to address some of the challenges facing the field of quantum information science (QIS). The

  15. Lilith: A scalable secure tool for massively parallel distributed computing

    SciTech Connect

    Armstrong, R.C.; Camp, L.J.; Evensky, D.A.; Gentile, A.C.

    1997-06-01

    Changes in high performance computing have necessitated the ability to utilize and interrogate potentially many thousands of processors. The ASCI (Advanced Strategic Computing Initiative) program conducted by the United States Department of Energy, for example, envisions thousands of distinct operating systems connected by low-latency gigabit-per-second networks. In addition multiple systems of this kind will be linked via high-capacity networks with latencies as low as the speed of light will allow. Code which spans systems of this sort must be scalable; yet constructing such code whether for applications, debugging, or maintenance is an unsolved problem. Lilith is a research software platform that attempts to answer these questions with an end toward meeting these needs. Presently, Lilith exists as a test-bed, written in Java, for various spanning algorithms and security schemes. The test-bed software has, and enforces, hooks allowing implementation and testing of various security schemes.

  16. Efficient and scalable serial extraction of DNA and RNA from frozen tissue samples.

    PubMed

    Mathot, Lucy; Lindman, Monica; Sjöblom, Tobias

    2011-01-07

    Advances in cancer genomics have created a demand for scalable sample processing. We here present a process for serial extraction of nucleic acids from the same frozen tissue sample based on magnetic silica particles. The process is automation friendly with high recoveries of pure DNA and RNA suitable for analysis.

  17. Scalable Video Streaming Adaptive to Time-Varying IEEE 802.11 MAC Parameters

    NASA Astrophysics Data System (ADS)

    Lee, Kyung-Jun; Suh, Doug-Young; Park, Gwang-Hoon; Huh, Jae-Doo

    This letter proposes a QoS control method for video streaming service over wireless networks. Based on statistical analysis, the time-varying MAC parameters highly related to channel condition are selected to predict available bitrate. Adaptive bitrate control of scalably-encoded video guarantees continuity in streaming service even if the channel condition changes abruptly.

  18. An Open Infrastructure for Scalable, Reconfigurable Analysis

    SciTech Connect

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  19. Towards Scalable Graph Computation on Mobile Devices

    PubMed Central

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
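
    A hedged sketch of the memory-mapping idea in Python (the paper targets iOS/Android apps, but the principle is the same): store the edge list as a flat binary file, map it, and let the operating system page data in on demand while the computation treats it like an in-memory array.

        # Out-of-core edge-list processing via memory mapping.
        import numpy as np

        edges = np.array([[0, 1], [1, 2], [2, 0]], dtype=np.int32)
        edges.tofile("graph.bin")                    # flat binary edge list

        mapped = np.memmap("graph.bin", dtype=np.int32).reshape(-1, 2)
        deg = np.bincount(mapped.ravel())            # per-vertex degree,
        print(deg)                                   # paged in on demand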

  20. Scalable enantioselective total synthesis of taxanes

    NASA Astrophysics Data System (ADS)

    Mendoza, Abraham; Ishihara, Yoshihiro; Baran, Phil S.

    2012-01-01

    Taxanes form a large family of terpenes comprising over 350 members, the most famous of which is Taxol (paclitaxel), a billion-dollar anticancer drug. Here, we describe the first practical and scalable synthetic entry to these natural products via a concise preparation of (+)-taxa-4(5),11(12)-dien-2-one, which has a suitable functional handle with which to access more oxidized members of its family. This route enables a gram-scale preparation of the ‘parent’ taxane—taxadiene—which is the largest quantity of this naturally occurring terpene ever isolated or prepared in pure form. The characteristic 6-8-6 tricyclic system of the taxane family, containing a bridgehead alkene, is forged via a vicinal difunctionalization/Diels-Alder strategy. Asymmetry is introduced by means of an enantioselective conjugate addition that forms an all-carbon quaternary centre, from which all other stereocentres are fixed through substrate control. This study lays a critical foundation for a planned access to minimally oxidized taxane analogues and a scalable laboratory preparation of Taxol itself.

  1. Towards Scalable Graph Computation on Mobile Devices.

    PubMed

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ MacBook Pro. By building a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device.

  2. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.

  3. Scalable Track Initiation for Optical Space Surveillance

    NASA Astrophysics Data System (ADS)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    The advent of high-sensitivity, high-capacity optical sensors for space surveillance presents us with interesting and challenging tracking problems. Accounting for the origin of every detection made by such systems is generally agreed to belong to the "most difficult" category of tracking problems. Especially in the early phases of the tracking scenario, when a catalog of targets is being compiled, or when many new objects appear in space because of an on-orbit explosion or collision, one faces a combinatorially large number of orbit (data association) hypotheses to evaluate. The number of hypotheses is reduced to a more manageable number if observations close together in time can, with high confidence, be associated by the sensor into extended tracks on single objects. Most current space surveillance techniques are predicated on the sensor systems' ability to form such tracks reliably. However, the required operational tempo of space surveillance, the very large number of objects in Earth orbit, and the difficulty of detecting dim, fast-moving targets at long ranges mean that individual sensor track reports are often inadequate for computing initial orbit hypotheses. In fact, this situation can occur with optical sensors even when the probability of detection is high. For example, the arc of orbit that has been observed may be too short or may have been sampled too sparsely to allow well-conditioned, usable orbit estimates from single tracks. In that case, one has no choice but to solve a data association problem involving an unknown number of targets and many widely spaced observations of uncertain origin. In the present paper, we are motivated by this more difficult aspect of the satellite cataloging problem. However, the results of this analysis may find use in a variety of less stressing tracking applications. The computational complexity of track initiation using only angle measurements is polynomial in time. However, the polynomial degree can be high, always at

  4. Scalable antifouling reverse osmosis membranes utilizing perfluorophenyl azide photochemistry.

    PubMed

    McVerry, Brian T; Wong, Mavis C Y; Marsh, Kristofer L; Temple, James A T; Marambio-Jones, Catalina; Hoek, Eric M V; Kaner, Richard B

    2014-09-01

    We present a method to produce antifouling reverse osmosis (RO) membranes that maintains the process and scalability of current RO membrane manufacturing. Utilizing perfluorophenyl azide (PFPA) photochemistry, commercial reverse osmosis membranes were dipped into an aqueous solution containing PFPA-terminated poly(ethylene glycol) species and then exposed to ultraviolet light under ambient conditions, a procedure that can easily be adapted to roll-to-roll manufacturing. Successful covalent modification of commercial reverse osmosis membranes was confirmed with attenuated total reflectance infrared spectroscopy and contact angle measurements. By employing X-ray photoelectron spectroscopy, it was determined that PFPAs undergo UV-generated nitrene addition and bind to the membrane through an aziridine linkage. After modification with the PFPA-PEG derivatives, the reverse osmosis membranes exhibit high fouling resistance.

  5. A Scalable Implementation of Van der Waals Density Functionals

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Gygi, Francois

    2010-03-01

    Recently developed Van der Waals density functionals [1] offer the promise to account for weak intermolecular interactions that are not described accurately by local exchange-correlation density functionals. In spite of recent progress [2], the computational cost of such calculations remains high. We present a scalable parallel implementation of the functional proposed by Dion et al. [1]. The method is implemented in the Qbox first-principles simulation code (http://eslab.ucdavis.edu/software/qbox). Application to large molecular systems will be presented. [1] M. Dion et al., Phys. Rev. Lett. 92, 246401 (2004). [2] G. Roman-Perez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).

  6. Scalable syntheses of the BET bromodomain inhibitor JQ1

    PubMed Central

    Syeda, Shameem Sultana; Jakkaraj, Sudhakar; Georg, Gunda I.

    2015-01-01

    We have developed methods involving the use of alternate, safer reagents for the scalable syntheses of the potent BET bromodomain inhibitor JQ1. A one-pot, three-step method, involving the conversion of a benzodiazepine to a thioamide using Lawesson’s reagent, followed by amidrazone formation and installation of the triazole moiety, furnished JQ1. This method provides good yields and a facile purification process. For the synthesis of enantiomerically enriched (+)-JQ1, the highly toxic reagent diethyl chlorophosphate, used in a previous synthesis, was replaced with the safer reagent diphenyl chlorophosphate in the three-step, one-pot triazole formation without affecting the yield or enantiomeric purity of (+)-JQ1. PMID:26034331

  7. A Practical and Scalable Tool to Find Overlaps between Sequences

    PubMed Central

    Haj Rachid, Maan

    2015-01-01

    The evolution of next-generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045
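
    To make the problem concrete, the sketch below computes, for every ordered pair (i, j), the longest suffix of string i that is a prefix of string j, using a plain (uncompacted) prefix tree; the paper's compact trie and parallel techniques are deliberately not reproduced:

      def build_trie(strings):
          """Trie mapping each character to a child dict; every node records
          the ids of the strings whose prefix ends there."""
          root = {}
          for i, s in enumerate(strings):
              node = root
              for ch in s:
                  node = node.setdefault(ch, {})
                  node.setdefault("ids", set()).add(i)
          return root

      def all_pairs_suffix_prefix(strings):
          """overlap[i][j] = length of the longest suffix of strings[i]
          that is a prefix of strings[j]."""
          root, n = build_trie(strings), len(strings)
          overlap = [[0] * n for _ in range(n)]
          for i, s in enumerate(strings):
              for start in range(len(s)):        # longest suffixes first
                  node, matched = root, True
                  for ch in s[start:]:
                      if ch not in node:
                          matched = False
                          break
                      node = node[ch]
                  if matched:
                      for j in node["ids"]:
                          if j != i and overlap[i][j] == 0:
                              overlap[i][j] = len(s) - start
          return overlap

      print(all_pairs_suffix_prefix(["ACGT", "GTAC"]))  # [[0, 2], [2, 0]]: "GT", "AC"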

  8. On the scalability of ring fiber designs for OAM multiplexing.

    PubMed

    Ramachandran, S; Gregg, P; Kristensen, P; Golowich, S E

    2015-02-09

    The promise of the infinite-dimensionality of orbital angular momentum (OAM) and its application to free-space and fiber communications has attracted immense attention in recent years. In order to facilitate OAM-guidance, novel fibers have been proposed and developed, including a class of so-called ring-fibers. In these fibers, the wave-guiding region is a high-index annulus instead of a conventional circular core, which for reasons related to polarization-dependent differential phase shifts for light at waveguide boundaries, leads to enhanced stability for OAM modes. We review the theory and implementation of this nascent class of waveguides, and discuss the opportunities and limitations they present for OAM scalability.

  9. Memory bandwidth-scalable motion estimation for mobile video coding

    NASA Astrophysics Data System (ADS)

    Hsieh, Jui-Hung; Tai, Wei-Cheng; Chang, Tian-Sheuan

    2011-12-01

    The heavy memory access of motion estimation (ME) execution consumes significant power and could limit ME execution when the available memory bandwidth (BW) is reduced because of access congestion or changes in the dynamics of the power environment of modern mobile devices. In order to adapt to the changing BW while maintaining the rate-distortion (R-D) performance, this article proposes a novel data BW-scalable algorithm for ME with mobile multimedia chips. The available BW is modeled in a R-D sense and allocated to fit the dynamic contents. The simulation result shows 70% BW savings while keeping equivalent R-D performance compared with H.264 reference software for low-motion CIF-sized video. For high-motion sequences, the result shows our algorithm can better use the available BW to save an average bit rate of up to 13% with up to 0.1-dB PSNR increase for similar BW usage.

  10. Center for Programming Models for Scalable Parallel Computing

    SciTech Connect

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  11. SCALABLE FUSED LASSO SVM FOR CONNECTOME-BASED DISEASE PREDICTION

    PubMed Central

    Watanabe, Takanori; Scott, Clayton D.; Kessler, Daniel; Angstadt, Michael; Sripada, Chandra S.

    2015-01-01

    There is substantial interest in developing machine-based methods that reliably distinguish patients from healthy controls using high-dimensional correlation maps known as functional connectomes (FCs), generated from resting-state fMRI. To address the dimensionality of FCs, the current body of work relies on feature selection techniques that are blind to the spatial structure of the data. In this paper, we propose to use the fused Lasso regularized support vector machine to explicitly account for the 6-D structure of the FC (defined by pairs of points in 3-D brain space). In order to solve the resulting nonsmooth and large-scale optimization problem, we introduce a novel and scalable algorithm based on the alternating direction method. Experiments on real resting-state scans show that our approach can recover results that are more neuroscientifically informative than previous methods. PMID:25892971
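
    Schematically, and with notation assumed here rather than copied from the paper, the fused Lasso regularized SVM couples the hinge loss with a sparsity term and a fusion term over neighboring connectome coordinates:

      \min_{w,\,b}\; \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i (w^{\top} x_i + b)\bigr)
        \;+\; \lambda_1 \|w\|_1
        \;+\; \lambda_2 \sum_{(k,\,l) \in E} |w_k - w_l|

    where E collects the pairs of coordinates that are spatial neighbors in the 6-D connectome structure; the nonsmooth fusion term is what the alternating direction method is introduced to handle.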

  12. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    SciTech Connect

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  13. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup, and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimates. Another set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
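
    In the simplified forms referred to above, with f the sequential fraction of the workload and p processors, the three models can be written as

      S_{\text{fixed-size}}(p) = \frac{1}{f + (1 - f)/p}, \qquad
      S_{\text{fixed-time}}(p) = f + (1 - f)\,p, \qquad
      S_{\text{memory}}(p) = \frac{f + (1 - f)\,G(p)}{f + (1 - f)\,G(p)/p}

    where G(p) describes how much the problem may grow when the memory scales with p: G(p) = 1 recovers Amdahl's law and G(p) = p recovers Gustafson's scaled speedup.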

  14. A scalable sparse eigensolver for petascale applications

    NASA Astrophysics Data System (ADS)

    Keceli, Murat; Zhang, Hong; Zapol, Peter; Dixon, David; Wagner, Albert

    2015-03-01

    Exploiting the locality of chemical interactions, and therefore sparsity, is necessary to push the limits of quantum simulations beyond petascale. However, sparse numerical algorithms are known to have poor strong scaling. Here, we show that the shift-and-invert parallel spectral transformations (SIPs) method can scale up to two hundred thousand cores for density functional based tight-binding (DFTB) or semi-empirical molecular orbital (SEMO) applications. We demonstrated the robustness and scalability of the SIPs method on various kinds of systems, including metallic carbon nanotubes, diamond crystals, and water clusters. We analyzed how the sparsity patterns and eigenvalue spectra of these different types of applications affect the computational performance of SIPs. The SIPs method enables us to perform simulations with more than five hundred thousand basis functions utilizing hundreds of thousands of cores. SIPs scales better in memory and computational time than dense eigensolvers, and it does not require fast interconnects.
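
    A serial sketch of the shift-and-invert building block, with SciPy standing in for the paper's SLEPc/PETSc machinery and a random sparse symmetric matrix standing in for a DFTB Hamiltonian (SIPs distributes many such shifts, one spectral slice each, across MPI ranks):

      import scipy.sparse as sp
      from scipy.sparse.linalg import eigsh

      n = 2000
      H = sp.random(n, n, density=1e-3, format="csr", random_state=0)
      H = 0.5 * (H + H.T)                      # symmetrize the stand-in matrix

      sigma = 0.3                              # shift inside this rank's slice
      # Shift-and-invert: factor (H - sigma*I) once, then converge quickly to
      # the eigenvalues closest to sigma.
      vals, vecs = eigsh(H, k=10, sigma=sigma, which="LM")
      print(vals)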

  15. Scalable, extensible, and portable numerical libraries

    SciTech Connect

    Gropp, W.; Smith, B.

    1995-01-01

    Designing a scalable and portable numerical library requires consideration of many factors, including choice of parallel communication technology, data structures, and user interfaces. The PETSc library (Portable Extensible Tools for Scientific computing) makes use of modern software technology to provide a flexible and portable implementation. This talk will discuss the use of a meta-communication layer (allowing the user to choose different transport layers such as MPI, p4, pvm, or vendor-specific libraries) for portability, an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures (even vectors), permitting the library to adapt to the user rather than the other way around, and the separation of implementation language from user-interface language. Examples are presented.

  16. Parallel scalability of Hartree–Fock calculations

    SciTech Connect

    Chow, Edmond Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-14

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
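
    The purification route mentioned above can be illustrated with the classic McWeeny iteration, which replaces diagonalization by repeated matrix products; on a cluster, these products are exactly the network-bandwidth-bound kernels discussed in the paper. A dense, serial NumPy sketch (not the paper's distributed code):

      import numpy as np

      def mcweeny_purify(D, iters=30):
          """Drive D toward idempotency (D @ D == D), pushing occupied and
          virtual eigenvalues toward 1 and 0; assumes the initial guess has
          eigenvalues in [0, 1]."""
          for _ in range(iters):
              D2 = D @ D
              D = 3.0 * D2 - 2.0 * (D2 @ D)    # two matrix products per sweep
          return D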

  17. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increasing data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  18. Scalable quantum search using trapped ions

    SciTech Connect

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-04-15

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
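
    The logical structure named above (oracle, then inversion about average, repeated about π/4·√N times) is easy to see in a toy statevector simulation; plain NumPy, with none of the trapped-ion physics:

      import numpy as np

      n_items, marked = 16, 11
      psi = np.full(n_items, 1 / np.sqrt(n_items))       # uniform superposition

      oracle = np.eye(n_items)
      oracle[marked, marked] = -1                        # flip the marked amplitude
      inv_avg = 2 * np.full((n_items, n_items), 1 / n_items) - np.eye(n_items)

      for _ in range(int(np.pi / 4 * np.sqrt(n_items))): # ~pi/4 * sqrt(N) rounds
          psi = inv_avg @ (oracle @ psi)

      print(np.argmax(psi ** 2), (psi ** 2)[marked])     # 11, probability ~0.96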

  19. iSIGHT-FD scalability test report.

    SciTech Connect

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  20. BASSET: Scalable Gateway Finder in Large Graphs

    SciTech Connect

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
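
    Submodularity is what makes a simple greedy strategy near-optimal (the classic 1 − 1/e guarantee for monotone submodular objectives); a generic sketch, with gateway_value a hypothetical stand-in for the paper's proximity-based set score:

      def greedy_gateways(candidates, gateway_value, k):
          """Pick k nodes, each time adding the one with the largest
          marginal gain in the (submodular) set score."""
          chosen = set()
          for _ in range(k):
              best = max(
                  (c for c in candidates if c not in chosen),
                  key=lambda c: gateway_value(chosen | {c}) - gateway_value(chosen),
              )
              chosen.add(best)
          return chosen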

  1. Porphyrins as Catalysts in Scalable Organic Reactions.

    PubMed

    Barona-Castaño, Juan C; Carmona-Vargas, Christian C; Brocksom, Timothy J; de Oliveira, Kleber T

    2016-03-08

    Catalysis has been a topic of continuous interest since it was discovered in chemistry centuries ago. Aiming at the advance of reactions for efficient processes, a number of approaches have been developed over the last 180 years, and more recently, porphyrins have come to occupy an important role in this field. Porphyrins and metalloporphyrins are fascinating compounds which are involved in a number of synthetic transformations of great interest to industry and academia. The aim of this review is to cover the most recent progress in reactions catalysed by porphyrins in scalable procedures, thus presenting the state of the art in reactions of epoxidation, sulfoxidation, oxidation of alcohols to carbonyl compounds, and C-H functionalization. In addition, the use of porphyrins as photocatalysts in continuous flow processes is covered.

  2. A versatile scalable PET processing system

    SciTech Connect

    H. Dong, A. Weisenberger, J. McKisson, Xi Wenze, C. Cuevas, J. Wilson, L. Zukerman

    2011-06-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  3. Scalable ranked retrieval using document images

    NASA Astrophysics Data System (ADS)

    Jain, Rajiv; Oard, Douglas W.; Doermann, David

    2013-12-01

    Despite the explosion of text on the Internet, hard copy documents that have been scanned as images still play a significant role for some tasks. The best method to perform ranked retrieval on a large corpus of document images, however, remains an open research question. The most common approach has been to perform text retrieval using terms generated by optical character recognition. This paper, by contrast, examines whether a scalable segmentation-free image retrieval algorithm, which matches sub-images containing text or graphical objects, can provide additional benefit in satisfying a user's information needs on a large, real world dataset. Results on 7 million scanned pages from the CDIP v1.0 test collection show that content based image retrieval finds a substantial number of documents that text retrieval misses, and that when used as a basis for relevance feedback can yield improvements in retrieval effectiveness.

  4. A scalable approach to combinatorial library design.

    PubMed

    Sharma, Puneet; Salapaka, Srinivasa; Beck, Carolyn

    2011-01-01

    In this chapter, we describe an algorithm for the design of lead-generation libraries required in combinatorial drug discovery. This algorithm addresses simultaneously the two key criteria of diversity and representativeness of compounds in the resulting library and is computationally efficient when applied to a large class of lead-generation design problems. At the same time, additional constraints on experimental resources are also incorporated in the framework presented in this chapter. A computationally efficient scalable algorithm is developed, where the ability of the deterministic annealing algorithm to identify clusters is exploited to truncate computations over the entire dataset to computations over individual clusters. An analysis of this algorithm quantifies the trade-off between the error due to truncation and computational effort. Results applied on test datasets corroborate the analysis and show improvement by factors as large as ten or more depending on the datasets.

  5. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predicting both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workload, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  6. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  7. Scalable graphene production: perspectives and challenges of plasma applications.

    PubMed

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  8. Scalable multiparticle entanglement of trapped ions.

    PubMed

    Häffner, H; Hänsel, W; Roos, C F; Benhelm, J; Chek-al-Kar, D; Chwalla, M; Körber, T; Rapol, U D; Riebe, M; Schmidt, P O; Becher, C; Gühne, O; Dür, W; Blatt, R

    2005-12-01

    The generation, manipulation and fundamental understanding of entanglement lies at the very heart of quantum mechanics. Entangled particles are non-interacting but are described by a common wavefunction; consequently, individual particles are not independent of each other and their quantum properties are inextricably interwoven. The intriguing features of entanglement become particularly evident if the particles can be individually controlled and physically separated. However, both the experimental realization and characterization of entanglement become exceedingly difficult for systems with many particles. The main difficulty is to manipulate and detect the quantum state of individual particles as well as to control the interaction between them. So far, entanglement of four ions or five photons has been demonstrated experimentally. The creation of scalable multiparticle entanglement demands a non-exponential scaling of resources with particle number. Among the various kinds of entangled states, the 'W state' plays an important role as its entanglement is maximally persistent and robust even under particle loss. Such states are central as a resource in quantum information processing and multiparty quantum communication. Here we report the scalable and deterministic generation of four-, five-, six-, seven- and eight-particle entangled states of the W type with trapped ions. We obtain the maximum possible information on these states by performing full characterization via state tomography, using individual control and detection of the ions. A detailed analysis proves that the entanglement is genuine. The availability of such multiparticle entangled states, together with full information in the form of their density matrices, creates a test-bed for theoretical studies of multiparticle entanglement. Independently, 'Greenberger-Horne-Zeilinger' entangled states with up to six ions have been created and analysed in Boulder.
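
    For reference, the N-particle W state generated here is the symmetric single-excitation superposition

      |W_N\rangle = \frac{1}{\sqrt{N}} \bigl( |10\cdots 0\rangle + |01\cdots 0\rangle + \cdots + |00\cdots 1\rangle \bigr)

    which remains entangled after the loss of any single particle, in contrast to a Greenberger-Horne-Zeilinger state.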

  9. Scalable asynchronous execution of cellular automata

    NASA Astrophysics Data System (ADS)

    Folino, Gianluigi; Giordano, Andrea; Mastroianni, Carlo

    2016-10-01

    The performance and scalability of cellular automata, when executed on parallel/distributed machines, are limited by the necessity of synchronizing all the nodes at each time step, i.e., a node can execute only after the execution of the previous step at all the other nodes. However, these synchronization requirements can be relaxed: a node can execute one step after synchronizing only with the adjacent nodes. In this fashion, different nodes can execute different time steps. This can be notably advantageous in many novel and increasingly popular applications of cellular automata, such as smart city applications, simulation of natural phenomena, etc., in which the execution times can be different and variable, due to the heterogeneity of machines and/or data and/or executed functions. Indeed, a longer execution time at a node does not slow down the execution at all the other nodes but only at the neighboring nodes. This is particularly advantageous when the nodes that act as bottlenecks vary during the application execution. The goal of the paper is to analyze the benefits that can be achieved with the described asynchronous implementation of cellular automata, when compared to the classical all-to-all synchronization pattern. The performance and scalability have been evaluated through a Petri net model, as this model is very useful to represent the synchronization barrier among nodes. We examined the usual case in which the territory is partitioned into a number of regions, and the computation associated with a region is assigned to a computing node. We considered both the cases of one-dimensional and two-dimensional partitioning. The results show that the advantage obtained through the asynchronous execution, when compared to the all-to-all synchronous approach, is notable, and it can be as large as 90% in terms of speedup.
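
    The effect of the relaxed barrier is easy to reproduce in a toy model: with neighbor-only synchronization, a region may start step t as soon as it and its ring neighbors have finished step t-1, instead of waiting for the globally slowest region. A small Python sketch with hypothetical per-step costs (not the paper's Petri net model):

      import random

      def async_makespan(costs, steps):
          """costs[t][i]: compute time of region i at step t, ring topology."""
          n = len(costs[0])
          T = [0.0] * n                        # completion time of the last step
          for t in range(steps):
              T = [max(T[(i - 1) % n], T[i], T[(i + 1) % n]) + costs[t][i]
                   for i in range(n)]
          return max(T)

      random.seed(1)
      costs = [[random.uniform(1, 3) for _ in range(8)] for _ in range(50)]
      sync = sum(max(row) for row in costs)    # barrier waits for the slowest
      print(async_makespan(costs, 50), sync)   # asynchronous finishes earlier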

  10. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing, highly optimized metadata database server, an approach applicable to most scientific big-data compute systems. LVFS thus ties the existing storage system to the existing metadata infrastructure, which we believe leads to a scalable exabyte virtual file system. The implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing Filesystem in Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument, or product, without the need for duplication, symbolic links, or any other replication methods. We propose here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata files.

  11. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in vehicle motion, sport biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of 3D coordinates of interest points. The system has been used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  12. ParaText: scalable text modeling and analysis.

    SciTech Connect

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.

  13. Scalable Conjunction Processing using Spatiotemporally Indexed Ephemeris Data

    NASA Astrophysics Data System (ADS)

    Budianto-Ho, I.; Johnson, S.; Sivilli, R.; Alberty, C.; Scarberry, R.

    2014-09-01

    The collision warnings produced by the Joint Space Operations Center (JSpOC) are of critical importance in protecting U.S. and allied spacecraft against destructive collisions and protecting the lives of astronauts during space flight. As the Space Surveillance Network (SSN) improves its sensor capabilities for tracking small and dim space objects, the number of tracked objects increases from thousands to hundreds of thousands of objects, while the number of potential conjunctions increases with the square of the number of tracked objects. Classical filtering techniques such as apogee and perigee filters have proven insufficient. Novel and orders of magnitude faster conjunction analysis algorithms are required to find conjunctions in a timely manner. Stellar Science has developed innovative filtering techniques for satellite conjunction processing using spatiotemporally indexed ephemeris data that efficiently and accurately reduces the number of objects requiring high-fidelity and computationally-intensive conjunction analysis. Two such algorithms, one based on the k-d Tree pioneered in robotics applications and the other based on Spatial Hash Tables used in computer gaming and animation, use, at worst, an initial O(N log N) preprocessing pass (where N is the number of tracked objects) to build large O(N) spatial data structures that substantially reduce the required number of O(N^2) computations, substituting linear memory usage for quadratic processing time. The filters have been implemented as Open Services Gateway initiative (OSGi) plug-ins for the Continuous Anomalous Orbital Situation Discriminator (CAOS-D) conjunction analysis architecture. We have demonstrated the effectiveness, efficiency, and scalability of the techniques using a catalog of 100,000 objects, an analysis window of one day, on a 64-core computer with 1TB shared memory. Each algorithm can process the full catalog in 6 minutes or less, almost a twenty-fold performance improvement over the
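
    The flavor of the spatial-hash screen can be sketched in a few lines: bucket objects by coarse 3-D cell at a common epoch so that only co-located pairs are handed to the expensive high-fidelity analysis. The cell size, the single-epoch snapshot, and the omission of neighbor-cell probes are simplifications made here, not CAOS-D behavior:

      from collections import defaultdict
      from itertools import combinations

      def candidate_pairs(positions, cell_km=50.0):
          """positions: {object_id: (x, y, z) in km at a common epoch}."""
          grid = defaultdict(list)
          for oid, (x, y, z) in positions.items():
              key = (int(x // cell_km), int(y // cell_km), int(z // cell_km))
              grid[key].append(oid)
          pairs = set()                        # O(N) build instead of O(N^2) scan
          for bucket in grid.values():
              pairs.update(combinations(sorted(bucket), 2))
          return pairs                         # a full screen would also probe
                                               # the 26 neighboring cells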

  14. Scalable desktop visualisation of very large radio astronomy data cubes

    NASA Astrophysics Data System (ADS)

    Perkins, Simon; Questiaux, Jacques; Finniss, Stephen; Tyler, Robin; Blyth, Sarah; Kuttel, Michelle M.

    2014-07-01

    Observation data from radio telescopes is typically stored in three (or higher) dimensional data cubes, the resolution, coverage and size of which continues to grow as ever larger radio telescopes come online. The Square Kilometre Array, slated to be the largest radio telescope in the world, will generate multi-terabyte data cubes - several orders of magnitude larger than the current norm. Despite this imminent data deluge, scalable approaches to file access in astronomical visualisation software are rare: most current software packages cannot read astronomical data cubes that do not fit into computer system memory, or else provide access only at a serious performance cost. In addition, there is little support for interactive exploration of 3D data. We describe a scalable, hierarchical approach to 3D visualisation of very large spectral data cubes to enable rapid visualisation of large data files on standard desktop hardware. Our hierarchical approach, embodied in the AstroVis prototype, aims to provide a means of viewing large datasets that do not fit into system memory. The focus is on rapid initial response: our system initially rapidly presents a reduced, coarse-grained 3D view of the data cube selected, which is gradually refined. The user may select sub-regions of the cube to be explored in more detail, or extracted for use in applications that do not support large files. We thus shift the focus from data analysis informed by narrow slices of detailed information, to analysis informed by overview information, with details on demand. Our hierarchical solution to the rendering of large data cubes reduces the overall time to complete file reading, provides user feedback during file processing and is memory efficient. This solution does not require high performance computing hardware and can be implemented on any platform supporting the OpenGL rendering library.

  15. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter, and transport priority, as well as an experiment on real robots, validate the effectiveness of this work.

  16. A Scalable Framework to Detect Personal Health Mentions on Twitter

    PubMed Central

    Fabbri, Daniel; Rosenbloom, S Trent

    2015-01-01

    Background Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual’s health. Objective The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. Methods We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Results Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about

  17. SUPREM-DSMC: A New Scalable, Parallel, Reacting, Multidimensional Direct Simulation Monte Carlo Flow Code

    NASA Technical Reports Server (NTRS)

    Campbell, David; Wysong, Ingrid; Kaplan, Carolyn; Mott, David; Wadsworth, Dean; VanGilder, Douglas

    2000-01-01

    An AFRL/NRL team has recently been selected to develop a scalable, parallel, reacting, multidimensional (SUPREM) Direct Simulation Monte Carlo (DSMC) code for the DoD user community under the High Performance Computing Modernization Office (HPCMO) Common High Performance Computing Software Support Initiative (CHSSI). This paper will introduce the JANNAF Exhaust Plume community to this three-year development effort and present the overall goals, schedule, and current status of this new code.

  19. READSCAN: a fast and scalable pathogen discovery program with accurate genome relative abundance estimation.

    PubMed

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2013-02-01

    READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences in a simulated dataset of 20.1 million reads in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). http://cbrc.kaust.edu.sa/readscan

  20. Scalable Tight-Binding Model for Graphene

    NASA Astrophysics Data System (ADS)

    Liu, Ming-Hao; Rickhaus, Peter; Makk, Péter; Tóvári, Endre; Maurand, Romain; Tkatschenko, Fedor; Weiss, Markus; Schönenberger, Christian; Richter, Klaus

    2015-01-01

    Artificial graphene consisting of honeycomb lattices other than the atomic layer of carbon has been shown to exhibit electronic properties similar to real graphene. Here, we reverse the argument to show that transport properties of real graphene can be captured by simulations using "theoretical artificial graphene." To prove this, we first derive a simple condition, along with its restrictions, to achieve band structure invariance for a scalable graphene lattice. We then present transport measurements for an ultraclean suspended single-layer graphene p-n junction device, where ballistic transport features from complex Fabry-Pérot interference (at zero magnetic field) to the quantum Hall effect (at unusually low field) are observed and are well reproduced by transport simulations based on properly scaled single-particle tight-binding models. Our findings indicate that transport simulations for graphene can be efficiently performed with a strongly reduced number of atomic sites, allowing for reliable predictions for electric properties of complex graphene devices. We demonstrate the capability of the model by applying it to predict so-far unexplored gate-defined conductance quantization in single-layer graphene.
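
    In the standard nearest-neighbour notation (hopping energy t, lattice spacing a), graphene's low-energy dispersion is E = ±(3ta/2)|k|, so the Fermi velocity is fixed by the product ta. The invariance condition can therefore be summarized, as a schematic restatement rather than the paper's full derivation, as coarsening the lattice by a factor s while keeping that product constant:

      a \rightarrow a' = s\,a, \qquad t \rightarrow t' = t/s, \qquad
      v_F = \frac{3\,t\,a}{2\hbar} = \frac{3\,t'\,a'}{2\hbar}

    valid as long as the scaled spacing a' stays small compared with the Fermi wavelength and the device features being modeled.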

  1. SCTP as scalable video coding transport

    NASA Astrophysics Data System (ADS)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of scalable video coding (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC permits easily splitting the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit robust and efficient transport of the SVC layers. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
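
    As a small illustration of the transport side, the sketch below opens a one-to-one SCTP endpoint through the plain BSD socket API. It assumes a Linux host with kernel SCTP support; mapping each SVC layer onto its own SCTP stream, as evaluated in the paper, would further require an SCTP binding such as pysctp, which is an assumption here rather than something the paper prescribes:

      import socket

      # socket.IPPROTO_SCTP exists only on platforms whose kernel exposes SCTP.
      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
      sock.bind(("127.0.0.1", 9899))
      sock.listen(1)
      # With one SCTP stream per SVC layer, loss in an enhancement layer never
      # stalls delivery of the base layer (no cross-stream head-of-line blocking).
      sock.close()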

  2. Deep Hashing for Scalable Image Search.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2017-05-01

    In this paper, we propose a new deep hashing (DH) approach to learn compact binary codes for scalable image search. Unlike most existing binary codes learning methods, which usually seek a single linear projection to map each sample into a binary feature vector, we develop a deep neural network to seek multiple hierarchical non-linear transformations to learn these binary codes, so that the non-linear relationship of samples can be well exploited. Our model is learned under three constraints at the top layer of the developed deep network: 1) the loss between the compact real-valued code and the learned binary vector is minimized, 2) the binary codes distribute evenly on each bit, and 3) different bits are as independent as possible. To further improve the discriminative power of the learned binary codes, we extend DH into supervised DH (SDH) and multi-label SDH by including a discriminative term into the objective function of DH, which simultaneously maximizes the inter-class variations and minimizes the intra-class variations of the learned binary codes with the single-label and multi-label settings, respectively. Extensive experimental results on eight widely used image search data sets show that our proposed methods achieve very competitive results with the state of the art.
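
    Schematically, and with notation assumed here rather than copied from the paper, the three top-layer constraints can be collected into a single objective over the real-valued codes H (one column per sample) and their binarization B = sgn(H):

      \min_{\Theta}\; \tfrac{1}{2} \|B - H\|_F^2
        \;-\; \tfrac{\alpha}{2N} \operatorname{tr}\bigl(H H^{\top}\bigr)
        \;+\; \tfrac{\beta}{2} \Bigl\| \tfrac{1}{N} H H^{\top} - I \Bigr\|_F^2

    where the first term is the quantization loss, the second rewards balanced bits, and the third pushes different bits toward independence; the supervised variants add a discriminative term on top of this objective.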

  3. Scalability and interoperability within glideinWMS

    SciTech Connect

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  4. Scalability and interoperability within glideinWMS

    NASA Astrophysics Data System (ADS)

    Bradley, D.; Sfiligoi, I.; Padhi, S.; Frey, J.; Tannenbaum, T.

    2010-04-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  5. SCAN: A Scalable Model of Attentional Selection.

    PubMed

    Hudson, Patrick T.W.; van den Herik, H Jaap; Postma, Eric O.

    1997-08-01

    This paper describes the SCAN (Signal Channelling Attentional Network) model, a scalable neural network model for attentional scanning. The building block of SCAN is a gating lattice, a sparsely-connected neural network defined as a special case of the Ising lattice from statistical mechanics. The process of spatial selection through covert attention is interpreted as a biological solution to the problem of translation-invariant pattern processing. In SCAN, a sequence of pattern translations combines active selection with translation-invariant processing. Selected patterns are channelled through a gating network, formed by a hierarchical fractal structure of gating lattices, and mapped onto an output window. We show how the incorporation of an expectation-generating classifier network (e.g. Carpenter and Grossberg's ART network) into SCAN allows attentional selection to be driven by expectation. Simulation studies show the SCAN model to be capable of attending and identifying object patterns that are part of a realistically sized natural image. Copyright 1997 Elsevier Science Ltd.
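
    A toy stand-in for the channelling operation (not the gating-lattice network itself): a selected window, wherever it lies in the input, is mapped onto a fixed output window, so downstream processing is translation-invariant.

        import numpy as np

        def attend(image, top_left, window=8):
            """Channel the selected window onto a fixed-size output window."""
            r, c = top_left
            return image[r:r + window, c:c + window]

        img = np.random.default_rng(4).random((64, 64))
        print(attend(img, (10, 20)).shape)   # (8, 8) wherever the selection lies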

  6. Sparse Distributed Representation and Hierarchy: Keys to Scalable Machine Intelligence

    DTIC Science & Technology

    2016-04-01

    AFRL-RY-WP-TR-2016-0030, "Sparse Distributed Representation & Hierarchy: Keys to Scalable Machine Intelligence," Gerard (Rod) Rinkus, Greg [...]; contract number FA8650-13-C-7342. (Record text is fragmentary report-form boilerplate; the recoverable abstract fragment reports [...] classification accuracy on the Weizmann data set, accomplished with 3.5 minutes training time, with no machine parallelism and almost no software [...])

  7. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next generation NASA scalable space GNSS Science Receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  8. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    NASA Technical Reports Server (NTRS)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next generation NASA scalable space GNSS Science Receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  9. Toward Scalable Ion Traps for Quantum Information Processing

    DTIC Science & Technology

    2010-01-01

    Toward Scalable Ion Traps for Quantum Information Processing (IOPscience article; record text is fragmentary report-form boilerplate). Recoverable reference fragments: "Deterministic quantum teleportation of atomic qubits," Nature 429, 737; Jost J. D., Home J. P., Amini J. M., Hanneke D., Ozeri R., Langer C., Bollinger J. J., Leibfried [...]

  10. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how algorithms commonly used for medical applications perform when classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
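
    A minimal single-node sketch of the approach, with scikit-learn's GaussianMixture standing in for the paper's Hadoop implementation; one mixture is fitted per homogeneous partition of the stream:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def classify_partitions(partitions, n_populations):
            """Fit one Gaussian mixture per homogeneous data partition."""
            labels = []
            for X in partitions:          # X: (particles, channels)
                gmm = GaussianMixture(n_components=n_populations, random_state=0)
                labels.append(gmm.fit_predict(X))
            return labels

        rng = np.random.default_rng(0)
        parts = [rng.normal(size=(500, 3)), rng.normal(loc=2.0, size=(500, 3))]
        print([np.bincount(l) for l in classify_partitions(parts, 2)])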

  11. Simple and scalable method for peptide inhalable powder production.

    PubMed

    Schoubben, Aurélie; Blasi, Paolo; Giovagnoli, Stefano; Ricci, Maurizio; Rossi, Carlo

    2010-01-31

    The aim of this work was to produce capreomycin dry powder and capreomycin-loaded PLGA microparticles intended for inhaled tuberculosis therapy, using simple and scalable methods. Capreomycin's physico-chemical characteristics were modified by hydrophobic ion pairing with oleate. The powder suspension was processed by high-pressure homogenization and spray-dried. Spray-drying was also used to prepare capreomycin oleate (CO)-loaded PLGA microparticles: CO powder was suspended in the organic phase containing PLGA and the suspension was spray-dried. Particle dimensions were determined using photon correlation spectroscopy and an Accusizer C770. Morphology was investigated by scanning electron microscopy (SEM) and capreomycin content by spectrophotometry. Capreomycin properties were modified to increase polymeric microparticle content and obtain respirable CO powder. High-pressure homogenization reduced CO particle dimensions, yielding one population in the micrometric (6.18 μm) range and one in the nanometric (approximately 317 nm) range. SEM pictures showed particles that were not perfectly spherical, with a wrinkled surface, generally suitable for inhalation. PLGA particles were characterized by a high encapsulation efficiency (about 90%) and dimensions (approximately 6.69 μm) suitable for inhalation. In conclusion, two different formulations were successfully developed for capreomycin pulmonary delivery. The hydrophobic ion-pairing strategy led to a noticeable increase in drug content.

  12. Scalable Design of Paired CRISPR Guide RNAs for Genomic Deletion

    PubMed Central

    Polidori, Taisia; Palumbo, Emilio; Guigo, Roderic

    2017-01-01

    CRISPR-Cas9 technology can be used to engineer precise genomic deletions with pairs of single guide RNAs (sgRNAs). This approach has been widely adopted for diverse applications, from disease modelling of individual loci, to parallelized loss-of-function screens of thousands of regulatory elements. However, no solution has been presented for the unique bioinformatic design requirements of CRISPR deletion. We here present CRISPETa, a pipeline for flexible and scalable paired sgRNA design based on an empirical scoring model. Multiple sgRNA pairs are returned for each target, and any number of targets can be analyzed in parallel, making CRISPETa equally useful for focussed or high-throughput studies. Fast run-times are achieved using a pre-computed off-target database. sgRNA pair designs are output in a convenient format for visualisation and oligonucleotide ordering. We present pre-designed, high-coverage library designs for entire classes of protein-coding and non-coding elements in human, mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. In human cells, we reproducibly observe deletion efficiencies of ≥50% for CRISPETa designs targeting an enhancer and exonic fragment of the MALAT1 oncogene. In the latter case, deletion results in production of desired, truncated RNA. CRISPETa will be useful for researchers seeking to harness CRISPR for targeted genomic deletion, in a variety of model organisms, from single-target to high-throughput scales. PMID:28253259

  13. Scalable Design of Paired CRISPR Guide RNAs for Genomic Deletion.

    PubMed

    Pulido-Quetglas, Carlos; Aparicio-Prat, Estel; Arnan, Carme; Polidori, Taisia; Hermoso, Toni; Palumbo, Emilio; Ponomarenko, Julia; Guigo, Roderic; Johnson, Rory

    2017-03-01

    CRISPR-Cas9 technology can be used to engineer precise genomic deletions with pairs of single guide RNAs (sgRNAs). This approach has been widely adopted for diverse applications, from disease modelling of individual loci, to parallelized loss-of-function screens of thousands of regulatory elements. However, no solution has been presented for the unique bioinformatic design requirements of CRISPR deletion. We here present CRISPETa, a pipeline for flexible and scalable paired sgRNA design based on an empirical scoring model. Multiple sgRNA pairs are returned for each target, and any number of targets can be analyzed in parallel, making CRISPETa equally useful for focussed or high-throughput studies. Fast run-times are achieved using a pre-computed off-target database. sgRNA pair designs are output in a convenient format for visualisation and oligonucleotide ordering. We present pre-designed, high-coverage library designs for entire classes of protein-coding and non-coding elements in human, mouse, zebrafish, Drosophila melanogaster and Caenorhabditis elegans. In human cells, we reproducibly observe deletion efficiencies of ≥50% for CRISPETa designs targeting an enhancer and exonic fragment of the MALAT1 oncogene. In the latter case, deletion results in production of desired, truncated RNA. CRISPETa will be useful for researchers seeking to harness CRISPR for targeted genomic deletion, in a variety of model organisms, from single-target to high-throughput scales.
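
    To make the design problem concrete, here is a toy paired-design sketch: find NGG-adjacent protospacers flanking the target region and rank the pairs. The scoring is a placeholder, not CRISPETa's empirical model, and no off-target database is consulted.

        import re

        def find_sgRNAs(seq):
            """Start positions of 20-nt protospacers followed by an NGG PAM."""
            return [m.start() for m in re.finditer(r"(?=[ACGT]{21}GG)", seq)]

        def design_pairs(seq, target_start, target_end, flank=60):
            up = find_sgRNAs(seq[max(0, target_start - flank):target_start])
            down = find_sgRNAs(seq[target_end:target_end + flank])
            pairs = [(u, d) for u in up for d in down]
            # Placeholder ranking: prefer wider cuts; a real tool scores each guide.
            return sorted(pairs, key=lambda p: p[1] - p[0], reverse=True)

        seq = "ACGTAGGT" * 50
        print(design_pairs(seq, 150, 250)[:3])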

  14. MediAgent: a WWW-based scalable and self-learning medical search engine.

    PubMed Central

    Tay, J.; Ke, S.; Lun, K. C.

    1998-01-01

    Searching for medical information on the Internet can be tedious and frustrating due to the number of irrelevant entries returned from generic search engines. We have developed MediAgent, a scalable search engine that aims to deliver a web-based medical search solution which is focused, exhaustive and able to keep improving its databases. The software package can run off a single low-end system and be scaled into a client-server, distributed computing architecture for high-end needs. This scalable architecture boosts MediAgent's handling capacity to tens of millions of web pages. In addition to large volume handling, MediAgent is designed to be manageable. All subsystems are not only highly configurable, but also support remote, interactive management and monitoring by the system administrator. PMID:9929289

  15. Scalable tensor factorizations with missing data.

    SciTech Connect

    Morup, Morten; Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    2010-04-01

    The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks: all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.
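
    The weighted formulation is compact enough to state in a few lines. A NumPy sketch of the CP-WOPT objective for a 3-way tensor (the optimization itself is omitted for brevity):

        import numpy as np

        def cp_wopt_objective(X, W, A, B, C):
            """f = 0.5 * ||W * (X - [[A, B, C]])||_F^2; W is 1 on known entries."""
            model = np.einsum("ir,jr,kr->ijk", A, B, C)   # CP reconstruction
            return 0.5 * np.sum((W * (X - model)) ** 2)

        rng = np.random.default_rng(1)
        I, J, K, R = 10, 8, 6, 3
        A = rng.normal(size=(I, R)); B = rng.normal(size=(J, R)); C = rng.normal(size=(K, R))
        X = np.einsum("ir,jr,kr->ijk", A, B, C)
        W = (rng.random(X.shape) > 0.7).astype(float)     # ~70% of entries missing
        print(cp_wopt_objective(X, W, A, B, C))           # 0.0 at the true factors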

  16. Memory-Scalable GPU Spatial Hierarchy Construction.

    PubMed

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
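
    A toy illustration of the PBFS idea, divorced from the GPU details: expand the tree frontier in capped batches so the amount of in-flight work stays bounded.

        from collections import deque

        def pbfs(children, root, max_batch):
            """Breadth-first expansion in capped batches (PBFS flavor)."""
            frontier, order = deque([root]), []
            while frontier:
                batch = [frontier.popleft()
                         for _ in range(min(max_batch, len(frontier)))]
                for node in batch:
                    order.append(node)
                    frontier.extend(children.get(node, []))
            return order

        tree = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
        print(pbfs(tree, 0, max_batch=2))   # [0, 1, 2, 3, 4, 5, 6]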

  17. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
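
    As a flavor of iteration as a first-class construct, here is a toy single-node fixpoint query of the kind such a language can express (our own example, not MyriaL syntax):

        def reachable(edges, source):
            """Semi-naive fixpoint: expand only the newest frontier each round."""
            known, frontier = {source}, {source}
            while frontier:
                frontier = {d for (s, d) in edges if s in frontier} - known
                known |= frontier
            return known

        edges = {(1, 2), (2, 3), (3, 1), (4, 5)}
        print(sorted(reachable(edges, 1)))   # [1, 2, 3]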

  18. Scalable persistent identifier systems for dynamic datasets

    NASA Astrophysics Data System (ADS)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
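
    A minimal sketch of pattern-based resolution covering both mechanisms mentioned above, content negotiation and a query-string override; the URL patterns are invented for illustration.

        PATTERNS = {
            "igsn": {"text/html": "https://example.org/igsn/{id}.html",
                     "application/json": "https://example.org/igsn/{id}.json"},
        }

        def resolve(prefix, obj_id, accept="text/html", fmt=None):
            """Resolve one identifier to a representation-specific URL."""
            views = PATTERNS[prefix]
            if fmt:   # query-string override, e.g. ?format=json
                accept = {"html": "text/html", "json": "application/json"}[fmt]
            return views.get(accept, views["text/html"]).format(id=obj_id)

        print(resolve("igsn", "AU1234"))
        print(resolve("igsn", "AU1234", fmt="json"))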

  19. High-Power Zinc-Air Energy Storage: Enhanced Metal-Air Energy Storage System with Advanced Grid-Interoperable Power Electronics Enabling Scalability and Ultra-Low Cost

    SciTech Connect

    2010-10-01

    GRIDS Project: Fluidic is developing a low-cost, rechargeable, high-power module for Zinc-air batteries that will be used to store renewable energy. Zinc-air batteries are traditionally found in small, non-rechargeable devices like hearing aids because they are well-suited to delivering low levels of power for long periods of time. Historically, Zinc-air batteries have not been as useful for applications which require periodic bursts of power, like on the electrical grid. Fluidic hopes to fill this need by combining the high energy, low cost, and long run-time of a Zinc-air battery with new chemistry providing high power, high efficiency, and fast response. The battery module could allow large grid-storage batteries to provide much more power on very short demand—the most costly kind of power for utilities—and with much more versatile performance.

  20. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, M.; Geraskin, A.; Kuvshinov, A.

    2016-11-01

    We present a novel, open source 3-D MT forward solver based on a method of integral equations (IE) with contracting kernel. Special attention in the solver is paid to accurate calculations of Green's functions and their integrals which are cornerstones of any IE solution. The solver supports massive parallelization and is able to deal with highly detailed and contrasting models. We report results of a 3-D numerical experiment aimed at analyzing the accuracy and scalability of the code.
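
    In discrete form, a contracting-kernel IE solve reduces to the fixed-point iteration x_{k+1} = b + K x_k, convergent when ||K|| < 1. A toy NumPy stand-in (none of the solver's Green's-function machinery is reproduced):

        import numpy as np

        def contracting_solve(K, b, tol=1e-10, max_iter=500):
            """Fixed-point iteration x <- b + K x; converges for ||K|| < 1."""
            x = np.zeros_like(b)
            for _ in range(max_iter):
                x_new = b + K @ x
                if np.linalg.norm(x_new - x) < tol:
                    return x_new
                x = x_new
            return x

        rng = np.random.default_rng(2)
        K = 0.4 * rng.random((5, 5)) / 5       # norm safely below 1
        b = rng.random(5)
        x = contracting_solve(K, b)
        print(np.allclose(x, np.linalg.solve(np.eye(5) - K, b)))   # True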

  1. A Scalable Method for Regioselective 3-Acylation of 2-Substituted Indoles under Basic Conditions.

    PubMed

    Johansson, Henrik; Urruticoechea, Andoni; Larsen, Inna; Sejer Pedersen, Daniel

    2015-01-02

    Privileged structures such as 2-arylindoles are recurrent molecular scaffolds in bioactive molecules. We here present an operationally simple, high yielding and scalable method for regioselective 3-acylation of 2-substituted indoles under basic conditions using functionalized acid chlorides. The method shows good tolerance to both electron-withdrawing and donating substituents on the indole scaffold and gives ready access to a variety of functionalized 3-acylindole building blocks suited for further derivatization.

  2. A scalable, fully automated process for construction of sequence-ready human exome targeted capture libraries

    PubMed Central

    2011-01-01

    Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303

  3. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or
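
    GASPRNG's own API is not reproduced here, but the underlying requirement, independent per-worker streams, can be sketched generically with NumPy's SeedSequence:

        import numpy as np

        def make_streams(seed, n_workers):
            """One statistically independent generator per worker."""
            children = np.random.SeedSequence(seed).spawn(n_workers)
            return [np.random.default_rng(s) for s in children]

        streams = make_streams(seed=42, n_workers=4)
        print([round(g.random(), 4) for g in streams])   # distinct, reproducible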

  4. Superconductor digital electronics: Scalability and energy efficiency issues (Review Article)

    NASA Astrophysics Data System (ADS)

    Tolpygo, Sergey K.

    2016-05-01

    Superconductor digital electronics using Josephson junctions as ultrafast switches and magnetic-flux encoding of information was proposed over 30 years ago as a sub-terahertz clock frequency alternative to semiconductor electronics based on complementary metal-oxide-semiconductor (CMOS) transistors. Recently, interest in developing superconductor electronics has been renewed due to a search for energy saving solutions in applications related to high-performance computing. The current state of superconductor electronics and fabrication processes are reviewed in order to evaluate whether this electronics is scalable to a very large scale integration (VLSI) required to achieve computation complexities comparable to CMOS processors. A fully planarized process at MIT Lincoln Laboratory, perhaps the most advanced process developed so far for superconductor electronics, is used as an example. The process has nine superconducting layers: eight Nb wiring layers with the minimum feature size of 350 nm, and a thin superconducting layer for making compact high-kinetic-inductance bias inductors. All circuit layers are fully planarized using chemical mechanical planarization (CMP) of SiO2 interlayer dielectric. The physical limitations imposed on the circuit density by Josephson junctions, circuit inductors, shunt and bias resistors, etc., are discussed. Energy dissipation in superconducting circuits is also reviewed in order to estimate whether this technology, which requires cryogenic refrigeration, can be energy efficient. Fabrication process development required for increasing the density of superconductor digital circuits by a factor of ten and achieving densities above 107 Josephson junctions per cm2 is described.

  5. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    SciTech Connect

    Mohror, Kathryn; Moody, Adam; Bronevetsky, Greg; de Supinski, Bronis R.

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. At scale, however, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20× on today's systems and still maintain high application efficiency.
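
    A back-of-envelope companion to such models is Young's approximation for the optimal checkpoint interval, tau ~ sqrt(2 * C * MTBF); the numbers below are illustrative, not the paper's SCR measurements.

        import math

        def young_interval(checkpoint_cost_s, mtbf_s):
            """tau ~ sqrt(2 * C * MTBF), Young's approximation."""
            return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

        C, MTBF = 60.0, 4 * 3600.0       # 1-minute checkpoint, 4-hour MTBF
        print(f"checkpoint every {young_interval(C, MTBF) / 60:.1f} minutes")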

  6. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise. These algorithms have several applications that require high-performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelizing these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and achieve high parallelism, we develop new parallel algorithms for both DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarity between DBSCAN and computing connected components. Using datasets containing up to a billion floating-point numbers, we show that our parallel density-based clustering algorithms significantly outperform existing algorithms, achieving speedups of up to 27.5× on 40 cores on a shared-memory architecture and up to 5,765× using 8,192 cores on a distributed-memory architecture. In our experiments, we found that, while achieving this scalability, our algorithms produce clustering results of comparable quality to the classical algorithms.
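
    The DBSCAN-as-connected-components observation can be sketched serially with union-find (a toy version, not the parallel implementation):

        import numpy as np

        def dbscan_cc(X, eps, min_pts):
            """Toy DBSCAN via connected components over core points."""
            n = len(X)
            dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
            neighbors = dist <= eps
            core = neighbors.sum(axis=1) >= min_pts
            parent = list(range(n))
            def find(i):                      # union-find with path halving
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            for i in range(n):                # union core points with core neighbors
                if core[i]:
                    for j in np.nonzero(neighbors[i])[0]:
                        if core[j]:
                            parent[find(i)] = find(j)
            # Core points get their component id; others are noise/border (-1 here).
            return [find(i) if core[i] else -1 for i in range(n)]

        X = np.array([[0, 0], [0.1, 0], [0.2, 0], [5.0, 5.0]])
        print(dbscan_cc(X, eps=0.3, min_pts=2))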

  7. Scalable video compression using longer motion compensated temporal filters

    NASA Astrophysics Data System (ADS)

    Golwelkar, Abhijeet V.; Woods, John W.

    2003-06-01

    Three-dimensional (3-D) subband/wavelet coding using a motion compensated temporal filter (MCTF) is emerging as a very effective structure for highly scalable video coding. Most previous work has used two-tap Haar filters for the temporal analysis/synthesis. To make better use of the temporal redundancies, we propose an MCTF scheme based on longer biorthogonal filters. We show a lifting-based coder capable of subpixel-accurate motion compensation. If we retain the fixed-size GOP structure of the Haar filter MCTFs, we need to use symmetric extensions at both ends of the GOP. This gives rise to a loss of coding efficiency at the GOP boundaries, resulting in significant PSNR drops there. This performance can be considerably improved by using a 'sliding window' in place of the GOP block. We employ the 5/3 filter, whose non-orthogonality causes PSNR variation; this can be reduced by employing filter-based weighting coefficients. Overall, the longer filters have a higher coding gain than the Haar filters and show significant improvement in average PSNR at high bit rates. However, a doubling in the number of motion vectors to be transmitted translates to a drop in PSNR at the lower video bit rates.
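
    A sketch of one level of 5/3 lifting along the time axis, without motion compensation and with frames reduced to 1-D arrays; the symmetric extension at the ends mirrors the boundary issue discussed above.

        import numpy as np

        def mctf_53(frames):
            """One level of 5/3 lifting in time: predict (highs), update (lows).
            Ends use symmetric extension via index clamping."""
            f = [x.astype(float) for x in frames]
            n = len(f)
            get = lambda i: f[min(max(i, 0), n - 1)]
            H = [get(2*i + 1) - 0.5 * (get(2*i) + get(2*i + 2)) for i in range(n // 2)]
            getH = lambda i: H[min(max(i, 0), len(H) - 1)]
            L = [get(2*i) + 0.25 * (getH(i - 1) + getH(i)) for i in range((n + 1) // 2)]
            return L, H

        frames = [np.full(4, v) for v in (10.0, 12.0, 14.0, 16.0)]
        L, H = mctf_53(frames)
        print([h.mean() for h in H], [l.mean() for l in L])   # highs near zero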

  8. Jumping-Droplet-Enhanced Condensation on Scalable Superhydrophobic Nanostructured Surfaces

    NASA Astrophysics Data System (ADS)

    Miljkovic, Nenad; Enright, Ryan; Nam, Youngsuk; Lopez, Ken; Dou, Nicholas; Sack, Jean; Wang, Evelyn

    2013-03-01

    When droplets coalesce on a superhydrophobic nanostructured surface, the resulting droplet can jump from the surface due to the release of excess surface energy. If designed properly, these superhydrophobic nanostructured surfaces can not only allow for easy droplet removal at micrometric length scales during condensation but also promise to enhance heat transfer performance. However, the rationale for the design of an ideal nanostructured surface, as well as heat transfer experiments demonstrating the advantage of this jumping behavior, are lacking. Here, we show that silanized copper oxide surfaces created via a simple fabrication method can achieve highly efficient jumping-droplet condensation heat transfer. We experimentally demonstrated a 25% higher overall heat flux and 30% higher condensation heat transfer coefficient compared to state-of-the-art hydrophobic condensing surfaces at low supersaturations. This work not only shows significant condensation heat transfer enhancement, but promises a low cost and scalable approach to increase efficiency for applications such as atmospheric water harvesting and dehumidification. Furthermore, the results offer insights and an avenue to achieve high flux superhydrophobic condensation.

  9. Jumping-droplet-enhanced condensation on scalable superhydrophobic nanostructured surfaces.

    PubMed

    Miljkovic, Nenad; Enright, Ryan; Nam, Youngsuk; Lopez, Ken; Dou, Nicholas; Sack, Jean; Wang, Evelyn N

    2013-01-09

    When droplets coalesce on a superhydrophobic nanostructured surface, the resulting droplet can jump from the surface due to the release of excess surface energy. If designed properly, these superhydrophobic nanostructured surfaces can not only allow for easy droplet removal at micrometric length scales during condensation but also promise to enhance heat transfer performance. However, the rationale for the design of an ideal nanostructured surface as well as heat transfer experiments demonstrating the advantage of this jumping behavior are lacking. Here, we show that silanized copper oxide surfaces created via a simple fabrication method can achieve highly efficient jumping-droplet condensation heat transfer. We experimentally demonstrated a 25% higher overall heat flux and 30% higher condensation heat transfer coefficient compared to state-of-the-art hydrophobic condensing surfaces at low supersaturations (<1.12). This work not only shows significant condensation heat transfer enhancement but also promises a low cost and scalable approach to increase efficiency for applications such as atmospheric water harvesting and dehumidification. Furthermore, the results offer insights and an avenue to achieve high flux superhydrophobic condensation.

  10. Jumping-Droplet-Enhanced Condensation on Scalable Superhydrophobic Nanostructured Surfaces

    SciTech Connect

    Miljkovic, N; Enright, R; Nam, Y; Lopez, K; Dou, N; Sack, J; Wang, E

    2013-01-09

    When droplets coalesce on a superhydrophobic nanostructured surface, the resulting droplet can jump from the surface due to the release of excess surface energy. If designed properly, these superhydrophobic nanostructured surfaces can not only allow for easy droplet removal at micrometric length scales during condensation but also promise to enhance heat transfer performance. However, the rationale for the design of an ideal nanostructured surface as well as heat transfer experiments demonstrating the advantage of this jumping behavior are lacking. Here, we show that silanized copper oxide surfaces created via a simple fabrication method can achieve highly efficient jumping-droplet condensation heat transfer. We experimentally demonstrated a 25% higher overall heat flux and 30% higher condensation heat transfer coefficient compared to state-of-the-art hydrophobic condensing surfaces at low supersaturations (<1.12). This work not only shows significant condensation heat transfer enhancement but also promises a low cost and scalable approach to increase efficiency for applications such as atmospheric water harvesting and dehumidification. Furthermore, the results offer insights and an avenue to achieve high flux superhydrophobic condensation.

  11. Toward optimized light utilization in nanowire arrays using scalable nanosphere lithography and selected area growth.

    PubMed

    Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu

    2012-06-13

    Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.

  12. Hierarchical oriented predictions for resolution scalable lossless and near-lossless compression of CT and MRI biomedical images.

    PubMed

    Taquet, Jonathan; Labit, Claude

    2012-05-01

    We propose a new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not very efficient on smooth images, we also introduce new predictors that are dynamically optimized using a least-squares criterion. Lossless compression results obtained on a large-scale medical image database are more than 4% better on CTs and 9% better on MRIs than resolution-scalable JPEG-2000 (J2K), and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS, and an equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.
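
    The least-squares-optimized predictor idea can be shown in one dimension: solve for the weights that best predict a sample from its causal neighbors, then entropy-code the residual. A minimal NumPy sketch (not the paper's 2-D oriented predictors):

        import numpy as np

        def ls_predictor(signal, order=3):
            """Fit weights predicting each sample from `order` causal neighbors."""
            X = np.column_stack([signal[i:len(signal) - order + i]
                                 for i in range(order)])
            y = signal[order:]
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            return w, y - X @ w            # weights and residual to entropy-code

        s = np.cumsum(np.random.default_rng(3).normal(size=200))   # smooth-ish signal
        w, r = ls_predictor(s)
        print(w, float(np.abs(r).mean()))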

  13. A scalable climate health justice assessment model.

    PubMed

    McDonald, Yolanda J; Grineski, Sara E; Collins, Timothy W; Kim, Young-An

    2015-05-01

    This paper introduces a scalable "climate health justice" model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008-2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to climate change.
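
    The model's core arithmetic is simple; a sketch with illustrative numbers (not the paper's Texas counts):

        def rate_per_100k(admissions, population):
            return 1e5 * admissions / population

        asthma_rate = rate_per_100k(admissions=130_000, population=25_100_000)
        projected = asthma_rate * 1.05        # a >5% climate-attributable increase
        print(round(asthma_rate, 2), round(projected, 2))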

  14. Scalable Designs for Planar Ion Trap Arrays

    NASA Astrophysics Data System (ADS)

    Slusher, R. E.

    2007-03-01

    (Record abstract unavailable; the text consists of reference citations:) "Architecture for a large-scale ion-trap quantum computer," Nature 417, 709-711 (2002); S. Seidelin et al., "A microfabricated surface-electrode ion trap for scalable quantum information processing," quant-ph/0601173 (2006); J. Kim, S. Pau, Z. Ma, H. R. McLellan, J. V. Gates, A. Kornblit, and R. E. Slusher, "System design for large-scale ion trap quantum information processor," Quantum Inf. Comput. 5, 515-537 (2005).

  15. A scalable climate health justice assessment model

    PubMed Central

    McDonald, Yolanda J.; Grineski, Sara E.; Collins, Timothy W.; Kim, Young-An

    2014-01-01

    This paper introduces a scalable "climate health justice" model for assessing and projecting incidence, treatment costs, and sociospatial disparities for diseases with well-documented climate change linkages. The model is designed to employ low-cost secondary data, and it is rooted in a perspective that merges normative environmental justice concerns with theoretical grounding in health inequalities. Since the model employs International Classification of Diseases, Ninth Revision Clinical Modification (ICD-9-CM) disease codes, it is transferable to other contexts, appropriate for use across spatial scales, and suitable for comparative analyses. We demonstrate the utility of the model through analysis of 2008-2010 hospitalization discharge data at state and county levels in Texas (USA). We identified several disease categories (i.e., cardiovascular, gastrointestinal, heat-related, and respiratory) associated with climate change, and then selected corresponding ICD-9 codes with the highest hospitalization counts for further analyses. Selected diseases include ischemic heart disease, diarrhea, heat exhaustion/cramps/stroke/syncope, and asthma. Cardiovascular disease ranked first among the general categories of diseases for age-adjusted hospital admission rate (5286.37 per 100,000). In terms of specific selected diseases (per 100,000 population), asthma ranked first (517.51), followed by ischemic heart disease (195.20), diarrhea (75.35), and heat exhaustion/cramps/stroke/syncope (7.81). Charges associated with the selected diseases over the 3-year period amounted to US$5.6 billion. Blacks were disproportionately burdened by the selected diseases in comparison to non-Hispanic whites, while Hispanics were not. Spatial distributions of the selected disease rates revealed geographic zones of disproportionate risk. Based upon a downscaled regional climate-change projection model, we estimate a >5% increase in the incidence and treatment costs of asthma attributable to climate change.

  16. Scalable wideband equivalent circuit model for silicon-based on-chip transmission lines

    NASA Astrophysics Data System (ADS)

    Wang, Hansheng; He, Weiliang; Zhang, Minghui; Tanh, Lu

    2017-06-01

    A scalable wideband equivalent circuit model of silicon-based on-chip transmission lines is presented in this paper, along with an efficient analytical parameter extraction method based on an improved characteristic-function approach, including a relevant equation to reduce the deviation caused by approximation. The model consists of both series and shunt lumped elements and accounts for high-order parasitic effects. The equivalent circuit model is derived and verified to accurately recover the frequency-dependent parameters over a range from direct current to 50 GHz. The scalability of the model is demonstrated by comparing simulated and measured scattering parameters using the cascade method, with excellent agreement for samples fabricated in 0.13 and 0.18 μm CMOS processes. Project supported by the National Natural Science Foundation of China (No. 61674036).
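
    The cascade check mentioned above has a compact form: the ABCD matrices of two line segments multiply to give the matrix of the combined line. A toy lossless-line verification (illustrative values, not the paper's extracted parameters):

        import numpy as np

        def abcd_line(beta_l, z0=50.0):
            """ABCD matrix of an ideal lossless line of electrical length beta*l."""
            return np.array([[np.cos(beta_l), 1j * z0 * np.sin(beta_l)],
                             [1j * np.sin(beta_l) / z0, np.cos(beta_l)]])

        seg = abcd_line(0.3)
        print(np.allclose(seg @ seg, abcd_line(0.6)))   # cascade of two = double length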

  17. Development and Performance of a Scalable Version of a Nonhydrostatic Atmospheric Model

    SciTech Connect

    Mirin, A A; Sugiyama, G A; Chen, S; Hodur, R M; Holt, T R; Schmidt, J M

    2001-06-07

    The atmospheric forecast model of the Naval Research Laboratory's (NRL) Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) has been developed into a parallel, scalable model in a joint collaborative effort with Lawrence Livermore National Laboratory (LLNL). The new version of COAMPS has become the standard model of use at NRL and in LLNL's Atmospheric Science Division. The main purpose of this enterprise has been to take advantage of emerging scalable technology, to treat finer spatial and temporal resolutions needed in complex topographical or atmospheric conditions, as well as to allow the utilization of improved but computationally expensive physics packages. The parallel implementation facilitates the ability to provide real-time, high-resolution, multi-day numerical weather predictions for forecaster guidance, input to atmospheric dispersion simulations, and forecast ensembles.

  18. Lilith: A software framework for the rapid development of scalable tools for distributed computing

    SciTech Connect

    Gentile, A.C.; Evensky, D.A.; Armstrong, R.C.

    1997-12-31

    Lilith is a general purpose tool that provides a highly scalable, easy distribution of user code across a heterogeneous computing platform. By handling the details of code distribution and communication, such a framework allows for the rapid development of tools for the use and management of large distributed systems. This speed-up in development not only enables the easy creation of tools as needed but also facilitates the ultimate development of more refined, hard-coded tools as well. Lilith is written in Java, providing platform independence and further facilitating rapid tool development through Object reuse and ease of development. The authors present the user-involved objects in the Lilith Distributed Object System and the Lilith User API. They present an example of tool development, illustrating the user calls, and present results demonstrating Lilith's scalability.

  19. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    SciTech Connect

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  20. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    SciTech Connect

    Hively, Lee M; Sheldon, Frederick T

    2011-01-01

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  1. Toward Automatic Scalability Analysis of Message Passing Programs: A Case Study

    NASA Technical Reports Server (NTRS)

    Sarukkai, Sekhar R.; Mehra, Pankaj; Block, Robert; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Scalability analysis forms an important component of any performance debugging cycle for massively parallel machines. However, tools that help in performing such analysis for parallel programs are non-existent. The primary reason for the lack of such tools is the complexity involved in capturing program dynamics such as communication-computation overlap, communication latencies, and memory hierarchy reference patterns. In this paper, we highlight some simple techniques that can be used to study the scalability of explicit message-passing parallel programs while considering the above issues. We start from the high-level source code and use a methodology for deducing communication characteristics and their impact on the total execution time of the program. The approach is validated with the help of a pipelined method for solving scalar tri-diagonal systems, using both simulations and symbolic cost models on the Intel hypercube.
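
    A toy symbolic cost model of the kind described, for a pipelined solver: total time as a per-processor compute term plus a pipeline-fill communication term. The parameter values are placeholders, not measured machine constants.

        def predicted_time(n, p, t_flop=1e-9, t_lat=5e-6, t_byte=1e-9, msgs=2):
            compute = (n / p) * 8 * t_flop              # ~8 flops per row, split over p
            comm = msgs * p * (t_lat + 8 * t_byte)      # pipeline fill/drain
            return compute + comm

        for p in (8, 64, 512):
            print(p, predicted_time(n=10_000_000, p=p))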

  2. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures. Final Report

    SciTech Connect

    Gropp, William D.

    2014-06-23

    With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.

  3. A robust and scalable microfluidic metering method that allows protein crystal growth by free interface diffusion

    NASA Astrophysics Data System (ADS)

    Hansen, Carl L.; Skordalakes, Emmanuel; Berger, James M.; Quake, Stephen R.

    2002-12-01

    Producing robust and scalable fluid metering in a microfluidic device is a challenging problem. We developed a scheme for metering fluids on the picoliter scale that is scalable to highly integrated parallel architectures and is independent of the properties of the working fluid. We demonstrated the power of this method by fabricating and testing a microfluidic chip for rapid screening of protein crystallization conditions, a major hurdle in structural biology efforts. The chip has 480 active valves and performs 144 parallel reactions, each of which uses only 10 nl of protein sample. The properties of microfluidic mixing allow an efficient kinetic trajectory for crystallization, and the microfluidic device outperforms conventional techniques by detecting more crystallization conditions while using 2 orders of magnitude less protein sample. We demonstrate that diffraction-quality crystals may be grown and harvested from such nanoliter-volume reactions.

  4. Scalable Sensor Data Processor: A Multi-Core Payload Data Processor ASIC

    NASA Astrophysics Data System (ADS)

    Berrojo, L.; Moreno, R.; Regada, R.; Garcia, E.; Trautner, R.; Rauwerda, G.; Sunesen, K.; He, Y.; Redant, S.; Thys, G.; Andersson, J.; Habinc, S.

    2015-09-01

    The Scalable Sensor Data Processor (SSDP) project, under ESA contract and with TAS-E as prime contractor, targets the development of a multi-core ASIC for payload data processing to be used, among other terrestrial and space application areas, in future scientific and exploration missions with harsh radiation environments. The SSDP is a mixed-signal heterogeneous multi-core System-on-Chip (SoC). It combines GPP and NoC-based DSP subsystems with on-chip ADCs and several standard space I/Fs to make a flexible, configurable and scalable device. The NoC comprises two state-of-the-art fixed point Xentium® DSP processors, providing the device with high data processing capabilities.

  5. Generation of scalable terahertz radiation from cylindrically focused two-color laser pulses in air

    SciTech Connect

    Kuk, D.; Yoo, Y. J.; Rosenthal, E. W.; Jhajj, N.; Milchberg, H. M.; Kim, K. Y.

    2016-03-21

    We demonstrate scalable terahertz (THz) generation by focusing terawatt, two-color laser pulses in air with a cylindrical lens. This focusing geometry creates a two-dimensional air plasma sheet, which yields two diverging THz lobe profiles in the far field. This setup can avoid plasma-induced laser defocusing and subsequent THz saturation, previously observed with spherical lens focusing of high-power laser pulses. By expanding the plasma source into a two-dimensional sheet, cylindrical focusing can lead to scalable THz generation. This scheme provides an energy conversion efficiency of 7 × 10⁻⁴, ∼7 times better than spherical lens focusing. The diverging THz lobes are refocused with a combination of cylindrical and parabolic mirrors to produce strong THz fields (>21 MV/cm) at the focal point.

  6. Interface-Free Area-Scalable Self-Powered Electroluminescent System Driven by Triboelectric Generator.

    PubMed

    Wei, Xiao Yan; Kuang, Shuang Yang; Li, Hua Yang; Pan, Caofeng; Zhu, Guang; Wang, Zhong Lin

    2015-09-04

    A self-powered system that is interface-free is greatly desired for area-scalable applications. Here we report a self-powered electroluminescent system that consists of a triboelectric generator (TEG) and a thin-film electroluminescent (TFEL) lamp. The TEG provides a high-voltage alternating electric output, which fits well with the needs of the TFEL lamp. Induced charges pumped onto the lamp by the TEG generate an electric field that is sufficient to excite luminescence without an electrical interface circuit. Through rational serial connection of multiple TFEL lamps, effective and area-scalable luminescence is realized. It is demonstrated that multiple types of TEGs are applicable to the self-powered system, indicating that the system can make use of diverse mechanical sources and thus has potentially broad applications in illumination, display, entertainment, indication, surveillance and many others.

  7. Interface-Free Area-Scalable Self-Powered Electroluminescent System Driven by Triboelectric Generator

    PubMed Central

    Wei, Xiao Yan; Kuang, Shuang Yang; Li, Hua Yang; Pan, Caofeng; Zhu, Guang; Wang, Zhong Lin

    2015-01-01

    A self-powered system that is interface-free is greatly desired for area-scalable applications. Here we report a self-powered electroluminescent system that consists of a triboelectric generator (TEG) and a thin-film electroluminescent (TFEL) lamp. The TEG provides a high-voltage alternating electric output, which fits well with the needs of the TFEL lamp. Induced charges pumped onto the lamp by the TEG generate an electric field that is sufficient to excite luminescence without an electrical interface circuit. Through rational serial connection of multiple TFEL lamps, effective and area-scalable luminescence is realized. It is demonstrated that multiple types of TEGs are applicable to the self-powered system, indicating that the system can make use of diverse mechanical sources and thus has potentially broad applications in illumination, display, entertainment, indication, surveillance, and many others. PMID:26338365

  8. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    PubMed

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry showed that the heat treatment generated an iron-oxide-rich layer on the SS felt surface. The iron oxide layer dramatically enhanced electroactive biofilm formation on the SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5 ± 0.13 mA/cm², about seven times higher than on the untreated electrodes (0.22 ± 0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm², and a similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, simplicity of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anode material for BESs.

  9. Optimal complexity scalable H.264/AVC video decoding scheme for portable multimedia devices

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Park, Younghyeon; Jeon, Byeungwoo

    2013-07-01

    Limited computing resources in portable multimedia devices are an obstacle to real-time decoding of high-resolution and/or high-quality video content. Ordinary H.264/AVC video decoders cannot decode content that exceeds the limits set by their processing resources. However, in many real applications, especially on portable devices, a simplified decoding with some acceptable degradation may be preferable to simply refusing to decode such content. For this purpose, a complexity-scalable H.264/AVC video decoding scheme is investigated in this paper. First, several simplified versions of decoding tools with different characteristics are investigated to reduce decoding complexity and the consequent degradation of the reconstructed video. Then a complexity-scalable H.264/AVC decoding scheme is designed by selectively combining the effective simplified methods to achieve minimum degradation. Experimental results with H.264/AVC main profile bitstreams show that decoding complexity can be scalably controlled and reduced by up to 44% without subjective quality loss.
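
    The selective combination described above can be pictured as a budgeted trade-off between complexity saved and quality lost. The sketch below is an illustrative greedy selector, not the authors' scheme; the tool names, complexity fractions, and distortion numbers are hypothetical placeholders.

        # Illustrative sketch (not the paper's algorithm): pick, per decoding
        # tool, the variant giving the best complexity-saved / distortion-added
        # ratio until the total cost fits a target budget. All numbers are
        # hypothetical placeholders.
        from dataclasses import dataclass

        @dataclass
        class ToolVariant:
            name: str
            complexity: float   # relative decoding cost (full decoder sums to ~1.0)
            distortion: float   # estimated quality penalty (dB PSNR drop)

        # Each decoding tool offers a full version plus a simplified version.
        TOOLS = {
            "deblocking":    [ToolVariant("full", 0.20, 0.0), ToolVariant("skip", 0.00, 0.4)],
            "interpolation": [ToolVariant("6-tap", 0.30, 0.0), ToolVariant("bilinear", 0.12, 0.3)],
            "inverse_xform": [ToolVariant("exact", 0.25, 0.0), ToolVariant("pruned", 0.15, 0.2)],
        }

        def select_variants(budget: float):
            """Greedy: start from the full decoder, then apply the simplification
            with the best saved/added ratio until total complexity fits the budget."""
            choice = {tool: variants[0] for tool, variants in TOOLS.items()}
            total = sum(v.complexity for v in choice.values())
            while total > budget:
                best, best_ratio = None, 0.0
                for tool, variants in TOOLS.items():
                    cur = choice[tool]
                    for v in variants:
                        saved = cur.complexity - v.complexity
                        added = v.distortion - cur.distortion
                        if saved > 0 and saved / max(added, 1e-9) > best_ratio:
                            best, best_ratio = (tool, v), saved / max(added, 1e-9)
                if best is None:
                    break  # no further simplification possible
                choice[best[0]] = best[1]
                total = sum(v.complexity for v in choice.values())
            return choice, total

        chosen, cost = select_variants(budget=0.55)
        for tool, v in chosen.items():
            print(f"{tool}: {v.name} (cost {v.complexity}, loss {v.distortion} dB)")
        print(f"total relative complexity: {cost:.2f}")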

  10. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce

    PubMed Central

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D.; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S.

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications. PMID:25852536
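
    As a minimal picture of the kind of MapReduce signal pipeline described above (not the Cloudwave implementation), the sketch below maps fixed-length segments of a multi-channel recording to per-channel spectral band powers and reduces them to running totals; a real deployment would emit these key-value pairs through Hadoop. The sampling rate, bands, and synthetic data are invented for illustration.

        # MapReduce-style sketch: map (channel, segment) -> (key, band power)
        # pairs, then reduce by summing per (channel, band) key.
        import numpy as np
        from collections import defaultdict

        FS = 256  # assumed sampling rate (Hz)

        def map_segment(channel_id, segment):
            """Map: one fixed-length segment -> per-band power key-value pairs."""
            freqs = np.fft.rfftfreq(len(segment), d=1.0 / FS)
            power = np.abs(np.fft.rfft(segment)) ** 2
            for lo, hi, band in [(1, 4, "delta"), (4, 8, "theta"), (8, 13, "alpha")]:
                mask = (freqs >= lo) & (freqs < hi)
                yield (channel_id, band), float(power[mask].sum())

        def reduce_pairs(pairs):
            """Reduce: sum band power per (channel, band) key."""
            totals = defaultdict(float)
            for key, value in pairs:
                totals[key] += value
            return dict(totals)

        # Toy driver: 4 channels x 10 one-second segments of synthetic signal.
        rng = np.random.default_rng(0)
        pairs = (kv for ch in range(4)
                    for _ in range(10)
                    for kv in map_segment(ch, rng.standard_normal(FS)))
        print(reduce_pairs(pairs))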

  11. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce.

    PubMed

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volumes of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.

  12. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure the efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  13. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10⁵ MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure the efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  14. Designing Scalable PGAS Communication Subsystems on Cray Gemini Interconnect

    SciTech Connect

    Vishnu, Abhinav; Daily, Jeffrey A.; Palmer, Bruce J.

    2012-12-26

    The Cray Gemini Interconnect has recently been introduced as a next-generation network architecture for building multi-petaflop supercomputers. Cray XE6 systems including LANL Cielo, NERSC Hopper, ORNL Titan and the proposed NCSA Blue Waters leverage the Gemini Interconnect as their primary interconnection network. At the same time, programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) and Co-Array Fortran (CAF) have become available on these systems. Global Arrays is a popular PGAS model used in a variety of application domains including hydrodynamics, chemistry and visualization. Global Arrays uses the Aggregate Remote Memory Copy Interface (ARMCI) as the communication runtime system for Remote Memory Access communication. This paper presents the design, implementation and performance evaluation of scalable and high performance communication subsystems on the Cray Gemini Interconnect using ARMCI. The design space is explored, and the time-space complexities of communication protocols for one-sided communication primitives such as contiguous and uniformly non-contiguous datatypes, atomic memory operations (AMOs) and memory synchronization are presented. An implementation of the proposed design (referred to as ARMCI-Gemini) demonstrates its efficacy on communication primitives, application kernels such as LU decomposition and full applications such as a Smoothed Particle Hydrodynamics (SPH) application.
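
    For readers unfamiliar with one-sided communication, the snippet below illustrates the put-style primitive that runtimes like ARMCI provide, using mpi4py's MPI RMA interface as an analog (this is not ARMCI or the ARMCI-Gemini code; buffer sizes and ranks are arbitrary).

        # One-sided put: rank 1 writes into rank 0's memory without rank 0
        # making a matching receive call. Run with: mpiexec -n 2 python put.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Every rank exposes a buffer of 10 doubles as an RMA window.
        buf = np.zeros(10, dtype='d')
        win = MPI.Win.Create(buf, comm=comm)

        if rank == 1:
            data = np.arange(10, dtype='d')
            win.Lock(0)       # exclusive lock on rank 0's window
            win.Put(data, 0)  # one-sided transfer into the target window
            win.Unlock(0)     # unlocking completes the put at the target

        comm.Barrier()        # ensure the put finished before reading
        if rank == 0:
            print("rank 0 buffer after remote put:", buf)
        win.Free()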

  15. Long-range interactions and parallel scalability in molecular simulations

    NASA Astrophysics Data System (ADS)

    Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko

    2007-01-01

    Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single-processor and parallel performance on up to 8 nodes. We have also tested the scalability on four different networks: InfiniBand, Gigabit Ethernet, Fast Ethernet, and a nearly uniform memory architecture, i.e., one in which communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of 128, 512 and 2048 lipid molecules were used as test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
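
    For context, PME accelerates the reciprocal-space part of the classical Ewald decomposition of the Coulomb sum. A standard form of that splitting is sketched below (Gaussian units; prefactor conventions vary between references), with splitting parameter β controlling how work is shared between the short-range real-space term and the mesh-evaluated reciprocal-space term.

        % Ewald splitting that PME accelerates: real-space, reciprocal-space,
        % and self-interaction terms (Gaussian units; conventions vary).
        \begin{align*}
        E_{\mathrm{elec}} &= E_{\mathrm{real}} + E_{\mathrm{recip}} + E_{\mathrm{self}},\\
        E_{\mathrm{real}} &= \frac{1}{2}\sum_{i \neq j} q_i q_j\,
          \frac{\operatorname{erfc}(\beta r_{ij})}{r_{ij}},\\
        E_{\mathrm{recip}} &= \frac{1}{2V}\sum_{\mathbf{k} \neq 0}
          \frac{4\pi}{k^{2}}\, e^{-k^{2}/4\beta^{2}}
          \Bigl|\sum_{j} q_{j}\, e^{i\mathbf{k}\cdot\mathbf{r}_{j}}\Bigr|^{2},\\
        E_{\mathrm{self}} &= -\frac{\beta}{\sqrt{\pi}}\sum_{j} q_{j}^{2}.
        \end{align*}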

  16. Scalable quantum information processing and the optical topological quantum computer

    NASA Astrophysics Data System (ADS)

    Devitt, S.

    2010-02-01

    Optical quantum computation has represented one of the most successful testbed systems for quantum information processing. Along with ion traps and nuclear magnetic resonance (NMR), experimentalists have demonstrated control of qubits, multi-qubit gates and small quantum algorithms. However, photonic qubits suffer from a problematic lack of a large-scale architecture for fault-tolerant computation which could conceivably be built in the near future. While optical systems are, in some regards, ideal for quantum computing due to their high mobility and low susceptibility to environmental decoherence, these same properties make the construction of compact, chip-based architectures difficult. Here we discuss many of the important issues related to scalable fault-tolerant quantum computation and introduce a feasible architecture design for an optics-based computer. We combine the recent development of topological cluster-state computation with the photonic module, a simple chip-based device which can be utilized to deterministically entangle photons. The integration of this operational unit with one of the most exciting computational models solves many of the existing problems with other optics-based architectures and leads to a feasible large-scale design which can continuously generate a 3D cluster state with a photonic module resource cost linear in the cross-sectional size of the cluster.

  17. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process.

    PubMed

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-03-16

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database; in areas without calibration data, the algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works in both surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested in real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize users in the neighboring unsurveyed area.
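
    The virtual-database idea above can be sketched with an off-the-shelf Gaussian process: crowdsourced (x, y) → RSSI samples train the model, which then predicts RSSI at a uniform grid of virtual reference points extending past the surveyed area. This is an illustrative sketch, not the authors' MID/LGP pipeline; the kernel, grid spacing, and simulated path-loss data are invented.

        # GP-based virtual reference points (illustrative, single access point).
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(42)

        # Crowdsourced training data: positions (m) and RSSI (dBm), simulated
        # here with a log-distance path-loss model plus noise.
        train_xy = rng.uniform(0, 20, size=(200, 2))
        dist = np.linalg.norm(train_xy - np.array([10.0, 10.0]), axis=1)
        train_rssi = -40 - 20 * np.log10(dist + 1) + rng.normal(0, 2, 200)

        # Fit a GP with an RBF kernel plus a noise term.
        kernel = 1.0 * RBF(length_scale=3.0) + WhiteKernel(noise_level=1.0)
        gp = GaussianProcessRegressor(kernel=kernel).fit(train_xy, train_rssi)

        # Virtual RPs: uniform 2 m grid, extending past the surveyed area.
        gx, gy = np.meshgrid(np.arange(0, 24, 2.0), np.arange(0, 24, 2.0))
        virtual_rps = np.column_stack([gx.ravel(), gy.ravel()])
        rssi_mean, rssi_std = gp.predict(virtual_rps, return_std=True)

        print(f"{len(virtual_rps)} virtual RPs; predicted RSSI "
              f"{rssi_mean.min():.1f} to {rssi_mean.max():.1f} dBm")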

  18. Scalable service architecture for providing strong service guarantees

    NASA Astrophysics Data System (ADS)

    Christin, Nicolas; Liebeherr, Joerg

    2002-07-01

    For the past decade, a lot of Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, interest has shifted to service architectures with low overhead. However, these newer service architectures provide only weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires maintaining only little state information. A key mechanism of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network; instead, we plan to exploit the feedback mechanisms of TCP congestion control algorithms to regulate the traffic entering the network.

  19. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database; in areas without calibration data, the algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works in both surveyed and unsurveyed areas. We first propose the Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RPs). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs' RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user's location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested in real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize users in the neighboring unsurveyed area. PMID:26999139

  20. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in adhoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrate its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175