Sample records for large-scale high performance

  1. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, new kinetics of phase coarsening in the region of ultrahigh volume fraction are found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables the three-dimensional simulation system size to be increased up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual runtimes from numerical tests.
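
    The abstract does not give the form of the runtime prediction model; the sketch below is a hypothetical illustration of how such a model is commonly built, assuming the per-step cost splits into a compute term proportional to N^3/p (cells per process) and a communication term proportional to the subdomain surface, (N^3/p)^(2/3), with coefficients fitted to measured timings by least squares. The timing numbers are made up.

      import numpy as np

      def features(N, p):
          # compute term ~ cells per process, communication term ~ subdomain surface area
          work = N**3 / p
          return np.column_stack([work, work**(2.0 / 3.0), np.ones_like(work)])

      # hypothetical timing measurements: (grid size N, process count p, seconds per step)
      runs = np.array([
          [128,   8, 0.61],
          [256,  16, 2.35],
          [256,  64, 0.63],
          [512, 256, 1.19],
      ])
      N, p, t = runs[:, 0], runs[:, 1], runs[:, 2]

      # fit t ~ a*N^3/p + b*(N^3/p)^(2/3) + c by linear least squares
      coeffs, *_ = np.linalg.lstsq(features(N, p), t, rcond=None)

      # predict the per-step runtime of a 512^3 run on 1024 processes
      print(coeffs, features(np.array([512.0]), np.array([1024.0])) @ coeffs)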

  2. Large-scale synthesis of high-quality hexagonal boron nitride nanosheets for large-area graphene electronics.

    PubMed

    Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo

    2012-02-08

    Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is highly critical for the formation of pure h-BN nanosheets as well as the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased threefold compared to that of a device without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society

  3. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of the eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
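
    As a schematic illustration of the three kernels named above (matrix assembly, linear static solve, eigenanalysis), the sketch below assembles a small 1-D bar stiffness matrix and solves the static and free-vibration problems with SciPy. It is a serial toy example under assumed element properties, not the parallel SHPC implementation described in the paper.

      import numpy as np
      from scipy.linalg import eigh, solve

      n_elem, EA, L = 50, 1.0, 1.0          # toy 1-D bar: elements, axial stiffness, length
      h = L / n_elem
      n_dof = n_elem + 1

      # assemble global stiffness and (lumped) mass matrices element by element
      K = np.zeros((n_dof, n_dof))
      M = np.zeros((n_dof, n_dof))
      ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
      for e in range(n_elem):
          dofs = [e, e + 1]
          K[np.ix_(dofs, dofs)] += ke
          M[dofs, dofs] += 0.5 * h          # lumped mass, unit density

      # fix the first node, apply a unit tip load
      free = np.arange(1, n_dof)
      f = np.zeros(n_dof); f[-1] = 1.0

      u = solve(K[np.ix_(free, free)], f[free])                        # linear static solution
      w2, modes = eigh(K[np.ix_(free, free)], M[np.ix_(free, free)])   # generalized eigenproblem
      print(u[-1], np.sqrt(w2[:3]))         # tip displacement and first natural frequencies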

  4. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
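
    The controller's XOR engine computes parity across the data disks of a stripe so that any single failed disk can be reconstructed. The sketch below is a hypothetical NumPy illustration of that parity calculation and recovery, not the RAID-2 hardware design itself; disk counts and block sizes are made up.

      import numpy as np

      rng = np.random.default_rng(0)
      n_data_disks, stripe_bytes = 4, 8

      # one stripe: a block of bytes on each data disk
      data = rng.integers(0, 256, size=(n_data_disks, stripe_bytes), dtype=np.uint8)

      # parity block = bitwise XOR of all data blocks (what the XOR engine computes)
      parity = np.bitwise_xor.reduce(data, axis=0)

      # simulate losing disk 2 and rebuilding it from the survivors plus parity
      failed = 2
      survivors = np.delete(data, failed, axis=0)
      rebuilt = np.bitwise_xor.reduce(np.vstack([survivors, parity]), axis=0)

      assert np.array_equal(rebuilt, data[failed])
      print("reconstructed block:", rebuilt)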

  5. High-Stakes Accountability: Student Anxiety and Large-Scale Testing

    ERIC Educational Resources Information Center

    von der Embse, Nathaniel P.; Witmer, Sara E.

    2014-01-01

    This study examined the relationship between students' anxiety about high-stakes testing and their subsequent test performance. The FRIEDBEN Test Anxiety Scale was administered to 1,134 11th-grade students, and data were subsequently collected on their statewide assessment performance. Test anxiety was a significant predictor of test performance…

  6. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.

  7. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
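
    Flux balance analysis solves a linear program: maximize c^T v subject to S v = 0 and bound constraints on the flux vector v; DistributedFBA.jl distributes many such LPs over workers. The toy sketch below, written in Python rather than Julia and using a made-up three-reaction network, shows the single-LP core that gets replicated.

      import numpy as np
      from scipy.optimize import linprog

      # toy stoichiometric matrix S (metabolites x reactions) and flux bounds
      S = np.array([[ 1.0, -1.0,  0.0],
                    [ 0.0,  1.0, -1.0]])
      lb = np.array([0.0, 0.0, 0.0])
      ub = np.array([10.0, 10.0, 10.0])
      c = np.array([0.0, 0.0, 1.0])          # maximize flux through reaction 3

      # linprog minimizes, so negate the objective; steady state enforces S v = 0
      res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                    bounds=list(zip(lb, ub)), method="highs")
      print("optimal flux distribution:", res.x)

    Solving "multiple flux balance analyses on a subset or all the reactions" then roughly amounts to re-solving this LP with each reaction in turn as the objective (maximized and minimized), which is the independent work that can be spread over threads and nodes.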

  8. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  9. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In this paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
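
    The asynchronous double-buffering scheme overlaps the transfer of the next data block with computation on the current one. The sketch below is a schematic, CPU-side illustration of that pattern using two buffers and a background thread; it stands in for the CUDA multi-stream implementation described in the abstract, and load_block/migrate_block are hypothetical stand-ins for the transfer and imaging kernels.

      from concurrent.futures import ThreadPoolExecutor
      import numpy as np

      def load_block(i):
          # stand-in for an asynchronous host-to-device transfer of seismic block i
          return np.full(1_000_000, float(i))

      def migrate_block(block):
          # stand-in for the QPSTM imaging kernel applied to one block
          return block.sum()

      n_blocks, results = 8, []
      with ThreadPoolExecutor(max_workers=1) as io:
          pending = io.submit(load_block, 0)               # prefetch into buffer A
          for i in range(n_blocks):
              current = pending.result()                   # wait for the transfer to finish
              if i + 1 < n_blocks:
                  pending = io.submit(load_block, i + 1)   # fill buffer B while computing
              results.append(migrate_block(current))       # compute overlaps the next transfer
      print(results)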

  10. Vapor and healing treatment for CH3NH3PbI3-xClx films toward large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Gouda, Laxman; Gottesman, Ronen; Tirosh, Shay; Haltzi, Eynav; Hu, Jiangang; Ginsburg, Adam; Keller, David A.; Bouhadana, Yaniv; Zaban, Arie

    2016-03-01

    Hybrid methyl-ammonium lead trihalide perovskites are promising low-cost materials for use in solar cells and other optoelectronic applications. With a certified photovoltaic conversion efficiency record of 20.1%, scale-up for commercial purposes is already underway. However, preparation of large-area perovskite films remains a challenge, and films of perovskites on large electrodes suffer from non-uniform performance. Thus, production and characterization of the lateral uniformity of large-area films is a crucial step towards scale-up of devices. In this paper, we present a reproducible method for improving the lateral uniformity and performance of large-area perovskite solar cells (32 cm2). The method is based on methyl-ammonium iodide (MAI) vapor treatment as a new step in the sequential deposition of perovskite films. Following the MAI vapor treatment, we used high-throughput techniques to map the photovoltaic performance throughout the large-area device. The lateral uniformity and performance of all photovoltaic parameters (Voc, Jsc, fill factor, photo-conversion efficiency) increased, with an overall improvement in photo-conversion efficiency of ~100% following a vapor treatment at 140 °C. Based on XRD and photoluminescence measurements, we propose that the MAI treatment promotes a "healing effect" in the perovskite film which increases the lateral uniformity across the large-area solar cell. Thus, the straightforward MAI vapor treatment is highly beneficial for large-scale commercialization of perovskite solar cells, regardless of the specific deposition method. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08658b

  11. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Consequently, current architectures are perfectly tailored for urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
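
    As a schematic of the "simple yet effective architecture" idea (the paper's exact network is not reproduced here), the sketch below defines a small patch-classification CNN in PyTorch for five hypothetical land-cover classes; the patch size, band count, channel widths and class list are illustrative assumptions.

      import torch
      import torch.nn as nn

      N_CLASSES = 5   # e.g. buildings, roads, water, crops, vegetated areas

      class PatchCNN(nn.Module):
          """Small CNN classifying a 4-band 65x65 VHR patch into land-cover classes."""
          def __init__(self, in_bands=4, n_classes=N_CLASSES):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.classifier = nn.Linear(128, n_classes)

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      model = PatchCNN()
      logits = model(torch.randn(8, 4, 65, 65))   # a batch of 8 patches
      print(logits.shape)                         # torch.Size([8, 5])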

  12. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    NASA Astrophysics Data System (ADS)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high-precision and large-scale coordinate measurement, one commonly used approach to determine the coordinates of a target point is to utilize the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed with a geometric model and optimized with LightTools software, which enables the reflection of a wide-angle incident light beam onto the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed utilizing the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  13. Wafer scale fabrication of carbon nanotube thin film transistors with high yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Boyuan; Liang, Xuelei, E-mail: liangxl@pku.edu.cn, E-mail: ssxie@iphy.ac.cn; Yan, Qiuping

    Carbon nanotube thin film transistors (CNT-TFTs) are promising candidates for future high performance and low cost macro-electronics. However, most of the reported CNT-TFTs are fabricated in small quantities on relatively small substrates. The yield of large-scale fabrication and the performance uniformity of devices on large substrates must be improved before CNT-TFTs reach real products. In this paper, 25 200 devices with various geometries (channel width and channel length) were fabricated on 4-in. rigid and flexible substrates. Almost 100% device yield was obtained on a rigid substrate, with high output current (>8 μA/μm), high on/off current ratio (>10^5), and high mobility (>30 cm^2/V·s). More importantly, uniform performance over the 4-in. area was achieved, and the fabrication process can be scaled up. The results give us more confidence in the real application of the CNT-TFT technology in the near future.

  14. A Large-Scale Inquiry-Based Astronomy Intervention Project: Impact on Students' Content Knowledge Performance and Views of Their High School Science Classroom

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena; Deehan, James

    2016-01-01

    In this paper, we present the results from a study of the impact on students involved in a large-scale inquiry-based astronomical high school education intervention in Australia. Students in this intervention were led through an educational design allowing them to undertake an investigative approach to understanding the lifecycle of stars more…

  15. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  16. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  17. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and the weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
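
    The paper uses libvirt and QEMU/KVM to provision VMs directly on compute nodes. The sketch below is a minimal, hypothetical example of defining and starting one guest through the libvirt Python bindings; the domain XML (name, memory, vCPUs, image path) is illustrative, assumes a running libvirtd and an existing disk image, and omits the Ethernet-over-Aries networking described in the paper.

      import libvirt

      DOMAIN_XML = """
      <domain type='kvm'>
        <name>vcluster-node0</name>
        <memory unit='GiB'>8</memory>
        <vcpu>8</vcpu>
        <os><type arch='x86_64'>hvm</type></os>
        <devices>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/images/vcluster-node0.qcow2'/>
            <target dev='vda' bus='virtio'/>
          </disk>
        </devices>
      </domain>
      """

      conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
      dom = conn.defineXML(DOMAIN_XML)        # register the guest definition
      dom.create()                            # boot the virtual cluster node
      print(dom.name(), dom.isActive())
      conn.close()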

  18. Large-scale self-assembled zirconium phosphate smectic layers via a simple spray-coating process

    NASA Astrophysics Data System (ADS)

    Wong, Minhao; Ishige, Ryohei; White, Kevin L.; Li, Peng; Kim, Daehak; Krishnamoorti, Ramanan; Gunther, Robert; Higuchi, Takeshi; Jinnai, Hiroshi; Takahara, Atsushi; Nishimura, Riichi; Sue, Hung-Jue

    2014-04-01

    The large-scale assembly of asymmetric colloidal particles is used in creating high-performance fibres. A similar concept is extended to the manufacturing of thin films of self-assembled two-dimensional crystal-type materials with enhanced and tunable properties. Here we present a spray-coating method to manufacture thin, flexible and transparent epoxy films containing zirconium phosphate nanoplatelets self-assembled into a lamellar arrangement aligned parallel to the substrate. The self-assembled mesophase of zirconium phosphate nanoplatelets is stabilized by epoxy pre-polymer and exhibits rheology favourable towards large-scale manufacturing. The thermally cured film forms a mechanically robust coating and shows excellent gas barrier properties at both low- and high-humidity levels as a result of the highly aligned and overlapping arrangement of nanoplatelets. This work shows that the large-scale ordering of high-aspect-ratio nanoplatelets is easier to achieve than previously thought and may have implications in the technological applications for similar materials.

  19. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015. Reykjavik, Iceland. 2. Bo YT, Wang P, Guo ZL..."The Mahali project," Communications Magazine, vol. 52, pp. 111–133, Aug 2014.

  20. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  1. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
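
    The core idea of performance-aware co-scheduling is to route each operation to the processor where it runs best while keeping both the CPU and the GPU busy. The greedy sketch below is a simplified, hypothetical illustration of such a policy (the task names and estimated speedups are made up), not the scheduler implemented in the paper.

      import heapq

      # hypothetical feature-computation tasks: (name, cpu_time_s, estimated GPU speedup)
      tasks = [("color-deconv", 4.0, 6.0), ("morphometry", 3.0, 1.2),
               ("texture", 5.0, 8.0), ("nuclei-count", 2.0, 0.8)]

      # consider the most GPU-friendly tasks first, then place each task on the
      # device that frees up earliest, accounting for its speedup there
      devices = [(0.0, "gpu"), (0.0, "cpu")]       # (busy-until time, device)
      heapq.heapify(devices)
      plan = []
      for name, cpu_t, speedup in sorted(tasks, key=lambda t: -t[2]):
          free_at, dev = heapq.heappop(devices)
          run_t = cpu_t / speedup if dev == "gpu" else cpu_t
          plan.append((name, dev, free_at + run_t))
          heapq.heappush(devices, (free_at + run_t, dev))

      print(plan)   # task -> device assignment and finish time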

  2. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
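
    As a toy illustration of partitioning a circuit graph across heterogeneous devices while limiting inter-chip connections (not the Multi-Personality or Information-Aware algorithms themselves), the sketch below greedily assigns each node to the device that currently adds the fewest cut edges, subject to a per-device resource capacity; the netlist and capacities are made up.

      # toy netlist: node -> (resource cost, neighbours)
      graph = {
          "a": (2, {"b", "c"}), "b": (1, {"a", "c", "d"}),
          "c": (2, {"a", "b", "e"}), "d": (3, {"b", "e"}), "e": (1, {"c", "d"}),
      }
      capacity = {"fpga0": 5, "fpga1": 5}          # heterogeneous device capacities

      assignment, used = {}, {dev: 0 for dev in capacity}
      # visit heaviest nodes first; place each where it adds the fewest cut edges
      for node, (cost, nbrs) in sorted(graph.items(), key=lambda kv: -kv[1][0]):
          best = None
          for dev in capacity:
              if used[dev] + cost > capacity[dev]:
                  continue
              cut = sum(1 for n in nbrs if assignment.get(n) not in (None, dev))
              if best is None or cut < best[0]:
                  best = (cut, dev)
          assignment[node] = best[1]
          used[best[1]] += cost

      cut_edges = {frozenset((u, v)) for u, (_, ns) in graph.items()
                   for v in ns if assignment[u] != assignment[v]}
      print(assignment, len(cut_edges))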

  3. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to mark up and annotate images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution that uses a hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.
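
    As a minimal, hypothetical sketch of the RESTful-service-plus-versioning idea, a small Flask service might expose image versions as shown below. The endpoint paths and the in-memory store are invented for illustration; the paper's system uses a database-backed versioning scheme and hybrid SSD/HDD storage rather than this toy.

      from flask import Flask, jsonify, abort

      app = Flask(__name__)

      # in-memory stand-in for the versioned image/annotation archive
      STORE = {
          "img-001": [{"version": 1, "uri": "/blobs/img-001.v1"},
                      {"version": 2, "uri": "/blobs/img-001.v2"}],
      }

      @app.route("/images/<image_id>/versions")
      def list_versions(image_id):
          # audit trail: every revision of the image markup is retained
          return jsonify(STORE.get(image_id, []))

      @app.route("/images/<image_id>/versions/<int:version>")
      def get_version(image_id, version):
          for rec in STORE.get(image_id, []):
              if rec["version"] == version:
                  return jsonify(rec)
          abort(404)

      if __name__ == "__main__":
          app.run()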

  4. Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs

    NASA Astrophysics Data System (ADS)

    Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.

    2015-03-01

    This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.

  5. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is determining whether the vanadium redox flow battery (VRFB) or iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this concern, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than does the VRFB; and iii) the ICRFB is much less expensive in capital costs when operated at high power densities or at large capacities.

  6. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large-sized grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions leading to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot spot initiation in TATB crystals, employing a multiphysics code, ALE3D, coupled to the chemistry module, Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale meso-scale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  7. A study on the required performance of a 2G HTS wire for HTS wind power generators

    NASA Astrophysics Data System (ADS)

    Sung, Hae-Jin; Park, Minwon; Go, Byeong-Soo; Yu, In-Keun

    2016-05-01

    YBCO or REBCO coated conductor (2G) materials are developed for their superior performance at high magnetic field and temperature. Power system applications based on high temperature superconducting (HTS) 2G wire technology are attracting attention, including large-scale wind power generators. In particular, to solve problems associated with the foundations and mechanical structure of offshore wind turbines, due to the large diameter and heavy weight of the generator, an HTS generator is suggested as one of the key technologies. Many researchers have tried to develop feasible large-scale HTS wind power generator technologies. In this paper, a study on the required performance of a 2G HTS wire for large-scale wind power generators is discussed. A 12 MW class large-scale wind turbine and an HTS generator are designed using 2G HTS wire. The total length of the 2G HTS wire for the 12 MW HTS generator is estimated, and the essential prerequisites of the 2G HTS wire based generator are described. The magnetic field distributions of a pole module are illustrated, and the mechanical stress and strain of the pole module are analysed. Finally, a reasonable price for 2G HTS wire for commercialization of the HTS generator is suggested, reflecting the results of electromagnetic and mechanical analyses of the generator.

  8. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters, such as size, attitude and position, therefore requires a measurement system with high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly have a single function, require station moving and have other shortcomings. The laser tracker needs to work with a cooperative target, and it can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping, and it is hard for it to achieve the accuracy demanded in industrial measurement. The photogrammetry system can achieve wide-range multi-point measurement, but the measuring range is limited and the station needs to be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work not only by scanning the measurement path but also by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.

  9. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory using clusters; these clusters provide the high computational performance needed to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  10. Van der Waals epitaxial growth and optoelectronics of large-scale WSe2/SnS2 vertical bilayer p-n junctions.

    PubMed

    Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian

    2017-12-04

    High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe 2 /SnS 2 vertical bilayer p-n junctions on SiO 2 /Si substrates, with the lateral sizes reaching up to millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10 -14 A and a highest on-off ratio of up to 10 7 . Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all the directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.

  11. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  12. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
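
    The optimization-based methodology replaces the plain linear solve of the discretized diffusion equation with a bound-constrained minimization so that the discrete solution cannot go negative. The sketch below illustrates that idea on a 1-D finite-difference problem using SciPy's bound-constrained L-BFGS-B solver; it is a serial toy with a sign-changing source chosen merely to make the unconstrained solution dip negative, not the anisotropic-diffusion PETSc/TAO framework studied in the paper.

      import numpy as np
      from scipy.optimize import minimize

      n, h = 50, 1.0 / 51
      # 1-D diffusion stiffness matrix (tridiagonal, SPD) and a sign-changing source
      K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / h
      x = np.linspace(h, 1.0 - h, n)
      f = h * np.where(x < 0.5, 1.0, -5.0)

      # energy functional of the discrete problem and its gradient
      def J(u):
          return 0.5 * u @ K @ u - f @ u

      def dJ(u):
          return K @ u - f

      unconstrained = np.linalg.solve(K, f)                       # may dip below zero
      res = minimize(J, np.zeros(n), jac=dJ, method="L-BFGS-B",
                     bounds=[(0.0, None)] * n)                    # non-negative solution
      print(unconstrained.min(), res.x.min())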

  13. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
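
    A least-squares damping identification of the kind benchmarked here can be phrased as: given mass and stiffness matrices and sampled response and force histories, solve M x'' + C x' + K x = f for the unknown damping matrix C in a least-squares sense. The sketch below is a small synthetic 2-degree-of-freedom illustration of that formulation (all matrices and signals are made up), not the parallel High Performance Fortran implementation tested in the paper.

      import numpy as np

      # toy 2-DOF system with known M, K and a "true" damping matrix to recover
      M = np.diag([1.0, 2.0])
      K = np.array([[4.0, -2.0], [-2.0, 4.0]])
      C_true = np.array([[0.3, -0.1], [-0.1, 0.2]])

      rng = np.random.default_rng(1)
      t = np.linspace(0.0, 10.0, 2000)
      x = np.column_stack([np.sin(1.3 * t), np.cos(0.7 * t)])      # synthetic response
      v = np.gradient(x, t, axis=0)                                # velocities
      a = np.gradient(v, t, axis=0)                                # accelerations
      f = a @ M.T + v @ C_true.T + x @ K.T                         # consistent forces
      f += 0.001 * rng.standard_normal(f.shape)                    # measurement noise

      # least squares: for each sample,  C v = f - M a - K x  ->  stack rows and solve for C^T
      rhs = f - a @ M.T - x @ K.T
      C_T, *_ = np.linalg.lstsq(v, rhs, rcond=None)
      print(np.round(C_T.T, 3))                                    # recovered damping matrix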

  14. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for supporting the monitoring of LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
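
    As a minimal, hypothetical sketch of the predicate-based event filtering and subscription idea described above (the event fields, predicates and subscriber actions are invented), a filter can drop uninteresting events close to their source and forward only matching ones to the subscribed management tools:

      from dataclasses import dataclass, field
      from typing import Callable, List

      @dataclass
      class Event:
          source: str
          kind: str
          payload: dict = field(default_factory=dict)

      class EventFilter:
          """Forwards only events whose predicate matches a subscription."""
          def __init__(self):
              self.subs: List[tuple] = []   # (predicate, handler)

          def subscribe(self, predicate: Callable[[Event], bool],
                        handler: Callable[[Event], None]):
              self.subs.append((predicate, handler))

          def publish(self, event: Event):
              for predicate, handler in self.subs:
                  if predicate(event):      # filtering happens before dissemination
                      handler(event)

      bus = EventFilter()
      bus.subscribe(lambda e: e.kind == "latency" and e.payload.get("ms", 0) > 100,
                    lambda e: print("debugger notified:", e))
      bus.publish(Event("node7", "latency", {"ms": 250}))   # forwarded
      bus.publish(Event("node7", "heartbeat"))              # filtered out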

  15. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  16. Similarity spectra analysis of high-performance jet aircraft noise.

    PubMed

    Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M

    2013-04-01

    Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.
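
    The statement that combinations of the two similarity spectra match the measured spectra can be illustrated as a non-negative least-squares fit of the measured mean-square pressure spectrum to a weighted sum of fine-scale and large-scale template spectra. The sketch below uses made-up template and "measured" arrays purely to show the fitting step; it is not the analysis procedure of the paper.

      import numpy as np
      from scipy.optimize import nnls

      freqs = np.logspace(1.5, 4.0, 60)                     # Hz, illustrative band

      # stand-ins for the fine-scale (FSS) and large-scale (LSS) similarity spectra,
      # expressed as mean-square pressure (linear units, not dB)
      fss = 1.0 / (1.0 + (freqs / 400.0) ** 2)
      lss = (freqs / 200.0) ** 2 / (1.0 + (freqs / 200.0) ** 4)

      # synthetic "measured" spectrum: a mixture of the two plus noise
      rng = np.random.default_rng(2)
      measured = 0.3 * fss + 0.7 * lss + 0.01 * rng.random(freqs.size)

      # non-negative least squares gives the relative weights of the two components
      weights, _ = nnls(np.column_stack([fss, lss]), measured)
      print("FSS, LSS weights:", weights)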

  17. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
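
    The availability argument can be made concrete with the standard relations A = MTTF / (MTTF + MTTR) for a single node and A_pair = 1 - (1 - A)^2 for a dual-redundant node pair, assuming independent failures; the numbers below are illustrative and not taken from the paper.

      def availability(mttf_hours: float, mttr_hours: float) -> float:
          """Steady-state availability of a single component."""
          return mttf_hours / (mttf_hours + mttr_hours)

      # a deliberately cheap, less reliable node...
      a_single = availability(mttf_hours=2_000, mttr_hours=24)

      # ...duplicated: the pair is down only when both replicas are down (independence assumed)
      a_dual = 1.0 - (1.0 - a_single) ** 2

      print(f"single node: {a_single:.5f}, dual-redundant pair: {a_dual:.7f}")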

  18. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
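
    The thermodynamic-stability screen rests on the formation energy per atom, E_f = [E(A_xB_yC_z) - x E(A) - y E(B) - z E(C)] / (x + y + z), computed from DFT total energies. The helper below shows that bookkeeping with made-up energies; the actual totals and elemental reference energies come from the first-principles calculations, and the composition shown is purely illustrative.

      def formation_energy_per_atom(total_energy_ev: float,
                                    composition: dict,
                                    reference_ev_per_atom: dict) -> float:
          """Formation energy per atom of a compound relative to its elemental references."""
          n_atoms = sum(composition.values())
          reference = sum(n * reference_ev_per_atom[el] for el, n in composition.items())
          return (total_energy_ev - reference) / n_atoms

      # illustrative numbers only (eV); a negative value suggests thermodynamic stability
      e_f = formation_energy_per_atom(
          total_energy_ev=-18.9,
          composition={"Li": 4, "Mg": 2, "Al": 2},
          reference_ev_per_atom={"Li": -1.90, "Mg": -1.51, "Al": -3.75},
      )
      print(f"formation energy: {e_f:.3f} eV/atom")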

  19. Fabrication and performance analysis of 4-sq cm indium tin oxide/InP photovoltaic solar cells

    NASA Technical Reports Server (NTRS)

    Gessert, T. A.; Li, X.; Phelps, P. W.; Coutts, T. J.; Tzafaras, N.

    1991-01-01

    Large-area photovoltaic solar cells based on direct current magnetron sputter deposition of indium tin oxide (ITO) onto single-crystal p-InP substrates have demonstrated both the radiation hardness and high performance necessary for extraterrestrial applications. A small-scale production project was initiated in which approximately 50 ITO/InP cells are being produced. The procedures used in this small-scale production of 4-sq cm ITO/InP cells are presented and discussed. The discussion includes analyses of the performance range of all available production cells, and device performance data for the best cells thus far produced. Additionally, processing experience gained from the production of these cells is discussed, indicating other issues that may be encountered when large-scale production begins.

  20. Heavy hydrocarbon main injector technology

    NASA Technical Reports Server (NTRS)

    Fisher, S. C.; Arbit, H. A.

    1988-01-01

    One of the key components of the Advanced Launch System (ALS) is a large liquid rocket, booster engine. To keep the overall vehicle size and cost down, this engine will probably use liquid oxygen (LOX) and a heavy hydrocarbon, such as RP-1, as propellants and operate at relatively high chamber pressures to increase overall performance. A technology program (Heavy Hydrocarbon Main Injector Technology) is being studied. The main objective of this effort is to develop a logic plan and supporting experimental data base to reduce the risk of developing a large scale (approximately 750,000 lb thrust), high performance main injector system. The overall approach and program plan, from initial analyses to large scale, two dimensional combustor design and test, and the current status of the program are discussed. Progress includes performance and stability analyses, cold flow tests of injector model, design and fabrication of subscale injectors and calorimeter combustors for performance, heat transfer, and dynamic stability tests, and preparation of hot fire test plans. Related, current, high pressure, LOX/RP-1 injector technology efforts are also briefly discussed.

  1. Large voltage modulation in superconducting quantum interference devices with submicron-scale step-edge junctions

    NASA Astrophysics Data System (ADS)

    Lam, Simon K. H.

    2017-09-01

    A promising direction to improve the sensitivity of a SQUID is to increase its junction's normal resistance value, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step to develop highly sensitive single-layer SQUIDs, submicron-scale YBCO grain-boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and measurements of transport properties were performed from 4.3 to 77 K. The FIB-induced deposition layer proves to be effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value for the same type of junction at micron scale. The submicron junction Rn value is in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high-field-sensitivity SQUID applications at moderately low temperatures, e.g. at 40-60 K.

  2. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1974-01-01

    Based on the premises that (1) magnetic suspension techniques can play a useful role in large-scale aerodynamic testing and (2) superconductor technology offers the only practical hope for building large-scale magnetic suspensions, an all-superconductor three-component magnetic suspension and balance facility was built as a prototype and was tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities have been made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  3. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities. [cryogenic transonic wind tunnel]

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    Based on the premises that magnetic suspension techniques can play a useful role in large scale aerodynamic testing, and that superconductor technology offers the only practical hope for building large scale magnetic suspensions, an all-superconductor 3-component magnetic suspension and balance facility was built as a prototype and tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities at Langley Research Center were made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  4. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular-level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high-performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures but also avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has demonstrated massive performance improvements over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
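
    As a rough illustration of the kind of update loop a cellular-level agent-based model executes each step (a generic, hypothetical sketch in plain Python/NumPy, not FLAME or FLAME GPU code, which instead describe agents through a formal specification), consider agents that take a small random-motility step and divide only when their neighbourhood is uncrowded:

      import numpy as np

      rng = np.random.default_rng(0)
      pos = rng.uniform(0, 100.0, size=(500, 2))   # 500 cell agents in a 100x100 domain

      def step(pos, diffusion=0.5, divide_prob=0.01, crowd_radius=2.0, max_neighbours=6):
          # 1) random motility: every agent takes a small Brownian step
          pos = np.clip(pos + rng.normal(0.0, diffusion, size=pos.shape), 0.0, 100.0)
          # 2) count neighbours within crowd_radius (O(N^2); real frameworks parallelise this)
          d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
          neighbours = (d < crowd_radius).sum(axis=1) - 1
          # 3) uncrowded agents may divide, placing a daughter next to the parent
          divides = (neighbours < max_neighbours) & (rng.random(len(pos)) < divide_prob)
          daughters = pos[divides] + rng.normal(0.0, 0.5, size=(divides.sum(), 2))
          return np.vstack([pos, daughters])

      for _ in range(100):
          pos = step(pos)
      print("agents after 100 steps:", len(pos))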

  5. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  6. Large Scale Laser Crystallization of Solution-based Alumina-doped Zinc Oxide (AZO) Nanoinks for Highly Transparent Conductive Electrode

    PubMed Central

    Nian, Qiong; Callahan, Michael; Saei, Mojib; Look, David; Efstathiadis, Harry; Bailey, John; Cheng, Gary J.

    2015-01-01

    A new method combining aqueous solution printing with UV laser crystallization (UVLC) and post-annealing is developed to deposit highly transparent and conductive aluminum-doped zinc oxide (AZO) films. This technique is able to rapidly produce large-area AZO films with better structural and optoelectronic properties than most high-vacuum deposition methods, suggesting a potential large-scale manufacturing technique. The optoelectronic performance improvement is attributed to the UVLC- and forming gas annealing (FMG)-induced decrease in grain boundary density and the passivation of electron traps at grain boundaries. The physical model and computational simulation developed in this work could be applied to the thermal treatment of many other metal oxide films. PMID:26515670

  7. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon, biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project inventions can divert large volumes of cellulose waste into high-performance, low-embodied-energy, environmental construction...

  8. Partially Filled Aperture Interferometric Telescopes: Achieving Large Aperture and Coronagraphic Performance

    NASA Astrophysics Data System (ADS)

    Moretto, G.; Kuhn, J.; Langlois, M.; Berdugyna, S.; Tallon, M.

    2017-09-01

    Telescopes larger than currently planned 30-m class instruments must break the mass-aperture scaling relationship of the Keck-generation of multi-segmented telescopes. Partially filled aperture, but highly redundant baseline interferometric instruments may achieve both large aperture and high dynamic range. The PLANETS FOUNDATION group has explored hybrid telescope-interferometer concepts for narrow-field optical systems that exhibit coronagraphic performance over narrow fields-of-view. This paper describes how the Colossus and Exo-Life Finder telescope designs achieve 10x lower moving masses than current Extremely Large Telescopes.

  9. Improved uniformity in high-performance organic photovoltaics enabled by (3-aminopropyl)triethoxysilane cathode functionalization.

    PubMed

    Luck, Kyle A; Shastry, Tejas A; Loser, Stephen; Ogien, Gabriel; Marks, Tobin J; Hersam, Mark C

    2013-12-28

    Organic photovoltaics have the potential to serve as lightweight, low-cost, mechanically flexible solar cells. However, losses in efficiency as laboratory cells are scaled up to the module level have to date impeded large scale deployment. Here, we report that a 3-aminopropyltriethoxysilane (APTES) cathode interfacial treatment significantly enhances performance reproducibility in inverted high-efficiency PTB7:PC71BM organic photovoltaic cells, as demonstrated by the fabrication of 100 APTES-treated devices versus 100 untreated controls. The APTES-treated devices achieve a power conversion efficiency of 8.08 ± 0.12% with histogram skewness of -0.291, whereas the untreated controls achieve 7.80 ± 0.26% with histogram skewness of -1.86. By substantially suppressing the interfacial origins of underperforming cells, the APTES treatment offers a pathway for fabricating large-area modules with high spatial performance uniformity.
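
    To make the reported statistics concrete: the skewness of an efficiency histogram measures how heavy the tail of underperforming cells is. The sketch below uses hypothetical efficiency values (not the paper's data) and SciPy's sample skewness to show how a low-efficiency tail drives the skewness strongly negative while barely moving the mean:

      import numpy as np
      from scipy.stats import skew

      rng = np.random.default_rng(1)

      # Hypothetical device populations (percent power conversion efficiency)
      treated   = rng.normal(8.08, 0.12, size=100)                    # tight, near-symmetric
      untreated = np.concatenate([rng.normal(7.9, 0.15, size=90),     # main population
                                  rng.uniform(6.5, 7.3, size=10)])    # tail of underperformers

      for name, pce in [("APTES-treated (simulated)", treated),
                        ("untreated (simulated)", untreated)]:
          print(f"{name}: mean={pce.mean():.2f}%, std={pce.std(ddof=1):.2f}%, skew={skew(pce):.2f}")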

  10. Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian

    2016-01-01

    The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to present the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.

  11. Edge-localized mode avoidance and pedestal structure in I-mode plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walk, J. R., E-mail: jrwalk@psfc.mit.edu; Hughes, J. W.; Hubbard, A. E.

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines—pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence—both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  12. Edge-localized mode avoidance and pedestal structure in I-mode plasmas

    NASA Astrophysics Data System (ADS)

    Walk, J. R.; Hughes, J. W.; Hubbard, A. E.; Terry, J. L.; Whyte, D. G.; White, A. E.; Baek, S. G.; Reinke, M. L.; Theiler, C.; Churchill, R. M.; Rice, J. E.; Snyder, P. B.; Osborne, T.; Dominguez, A.; Cziegler, I.

    2014-05-01

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines—pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence—both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  13. Los Alamos Explosives Performance Key to Stockpile Stewardship

    ScienceCinema

    Dattelbaum, Dana

    2018-02-14

    As the U.S. Nuclear Deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives that are used in most weapons. At Los Alamos National Laboratory explosives research includes a wide variety of both large- and small-scale experiments that include small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and at the Nevada Nuclear Security Site, underground sub-critical experiments.

  14. Outlook and Challenges of Perovskite Solar Cells toward Terawatt-Scale Photovoltaic Module Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Kai; Kim, Donghoe; Whitaker, James B

    Rapid development of perovskite solar cells (PSCs) during the past several years has made this photovoltaic (PV) technology a serious contender for potential large-scale deployment on the terawatt scale in the PV market. To successfully transition PSC technology from the laboratory to industry scale, substantial efforts need to focus on scalable fabrication of high-performance perovskite modules with minimum negative environmental impact. Here, we provide an overview of the current research and our perspective regarding PSC technology toward future large-scale manufacturing and deployment. Several key challenges discussed are (1) a scalable process for large-area perovskite module fabrication; (2) less hazardous chemical routes for PSC fabrication; and (3) suitable perovskite module designs for different applications.

  15. High-Performance Computing Unlocks Innovation at NREL - Video Text Version

    Science.gov Websites

    At these scales, data visualizations and large-scale modeling provide insights and test new ideas. NREL and Hewlett-Packard won an R&D 100 award for what is described as the most energy-efficient data center in the world.

  16. Solution-Processable High-Purity Semiconducting SWCNTs for Large-Area Fabrication of High-Performance Thin-Film Transistors.

    PubMed

    Gu, Jianting; Han, Jie; Liu, Dan; Yu, Xiaoqin; Kang, Lixing; Qiu, Song; Jin, Hehua; Li, Hongbo; Li, Qingwen; Zhang, Jin

    2016-09-01

    For the large-area fabrication of thin-film transistors (TFTs), a new conjugated polymer, poly[9-(1-octylonoyl)-9H-carbazole-2,7-diyl], is developed to harvest ultrahigh-purity semiconducting single-walled carbon nanotubes. Based on combined spectral and nanodevice characterization, the purity is estimated to be up to 99.9%. The high-density, uniform networks formed by a dip-coating process enable the fabrication of high-performance TFTs on a wafer scale, and the as-fabricated TFTs exhibit a high degree of uniformity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Large-Scale, Three-Dimensional, Free-Standing, and Mesoporous Metal Oxide Networks for High-Performance Photocatalysis

    PubMed Central

    Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng

    2013-01-01

    Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component, large-scale, three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks combine macroscopic (millimeter-sized) dimensions with a mesoporous nanostructure of huge pore volume and large exposed surface area. The method can also be used to synthesize large-scale 3D macro/mesoporous hierarchical porous materials and noble-metal-nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks exhibit high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595

  18. Carbon and Carbon Hybrid Materials as Anodes for Sodium-Ion Batteries.

    PubMed

    Zhong, Xiongwu; Wu, Ying; Zeng, Sifan; Yu, Yan

    2018-02-12

    Sodium-ion batteries (SIBs) have attracted much attention for application in large-scale grid energy storage owing to the abundance and low cost of sodium sources. However, low energy density and poor cycling life hinder the practical application of SIBs. Recently, substantial efforts have been made to develop electrode materials to push forward large-scale practical applications. Carbon materials can be directly used as anode materials, and they show excellent sodium storage performance. Additionally, designing and constructing carbon hybrid materials is an effective strategy to obtain high-performance anodes for SIBs. In this review, we summarize recent research progress on carbon and carbon hybrid materials as anodes for SIBs. Nanostructural design to enhance the sodium storage performance of anode materials is discussed, and we offer some insight into potential directions for future high-performance anode materials for SIBs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are an order of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.
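
    The operational idea behind running on an interruptible spot market is to make every work unit safely re-runnable: persist progress, detect interruption, and resume on a fresh node. The following is a generic, hypothetical illustration of that pattern in plain Python; it is not HySDS code and simply assumes a job that can checkpoint its loop index to disk:

      import json, os, random

      CHECKPOINT = "job_checkpoint.json"

      class SpotInterruption(Exception):
          """Stand-in for the node being reclaimed by the spot market."""

      def load_checkpoint():
          if os.path.exists(CHECKPOINT):
              with open(CHECKPOINT) as f:
                  return json.load(f)["next_item"]
          return 0

      def process(items):
          for i in range(load_checkpoint(), len(items)):
              if random.random() < 0.05:            # simulated reclaim notice
                  raise SpotInterruption(f"reclaimed at item {i}")
              # ... real science processing of items[i] would go here ...
              with open(CHECKPOINT, "w") as f:       # persist progress after each item
                  json.dump({"next_item": i + 1}, f)

      items, attempts = list(range(200)), 0
      while True:
          attempts += 1
          try:
              process(items)
              break                                  # finished all items
          except SpotInterruption as e:
              print(f"attempt {attempts}: {e}; resuming from checkpoint")
      print(f"done after {attempts} attempt(s)")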

  20. Research on precision grinding technology of large scale and ultra thin optics

    NASA Astrophysics Data System (ADS)

    Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua

    2018-03-01

    The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. To realize high-precision grinding of these ductile elements, a low-deformation vacuum chuck was first designed to clamp the optics with high supporting rigidity over the full aperture. The optics were then ground flat under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optics was measured on-machine with a displacement sensor after elastic restitution. The flatness was converged to high accuracy by compensation machining, whose trajectories were generated from the measurement result. To obtain high parallelism, the optics were turned over and compensation-ground using the form error of the vacuum chuck. Finally, a grinding experiment on a large-scale, ultra-thin fused silica optic with dimensions of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optic was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.

  1. High-Throughput Microbore UPLC-MS Metabolic Phenotyping of Urine for Large-Scale Epidemiology Studies.

    PubMed

    Gray, Nicola; Lewis, Matthew R; Plumb, Robert S; Wilson, Ian D; Nicholson, Jeremy K

    2015-06-05

    A new generation of metabolic phenotyping centers is being created to meet the increasing demands of personalized healthcare, and this has resulted in a major requirement for economical, high-throughput metabonomic analysis by liquid chromatography-mass spectrometry (LC-MS). Meeting these new demands represents an emerging bioanalytical problem that must be solved if metabolic phenotyping is to be successfully applied to large clinical and epidemiological sample sets. Ultraperformance LC-MS (UPLC-MS) phenotyping based on 2.1 mm i.d. LC columns enables comprehensive metabolic profiling but, when employed for the analysis of thousands of samples, results in high solvent usage. Using UPLC-MS with 1 mm i.d. columns rather than the conventional 2.1 mm i.d. methodology shows that the resulting optimized microbore method provides equivalent or superior performance in terms of peak capacity, sensitivity, and robustness. On average, we also observed, when using the microbore-scale separation, a 2-3 fold increase in response over that obtained with the standard 2.1 mm scale method. When applied to the analysis of human urine, the 1 mm scale method showed no decline in performance over the course of 1000 analyses, illustrating that microbore UPLC-MS represents a viable alternative to conventional 2.1 mm i.d. formats for routine large-scale metabolic profiling studies while also providing a 75% reduction in solvent usage. The modest increase in sensitivity provided by this methodology also offers the potential either to reduce sample consumption or to increase the number of metabolite features detected with confidence, owing to the increased signal-to-noise ratios obtained. Implementation of this miniaturized UPLC-MS method of metabolic phenotyping results in clear analytical, economic, and environmental benefits for large-scale metabolic profiling studies, with similar or improved analytical performance compared to conventional UPLC-MS.
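
    A back-of-the-envelope scaling (not taken from the paper) shows why moving from a 2.1 mm to a 1 mm i.d. column saves solvent and can raise response: at equal linear velocity the flow rate scales with the column cross-sectional area, and for a fixed injected mass the eluting concentration scales inversely with it. The short calculation below makes those numbers concrete:

      # Simple geometric scaling between 2.1 mm and 1.0 mm i.d. columns
      d_conventional = 2.1   # column internal diameter, mm
      d_microbore    = 1.0

      area_ratio = (d_conventional / d_microbore) ** 2      # ~4.4x smaller cross-section
      solvent_saving = 1.0 - 1.0 / area_ratio               # flow (and solvent) at equal linear velocity

      print(f"cross-sectional area ratio: {area_ratio:.2f}x")
      print(f"theoretical solvent saving: {solvent_saving:.0%}")    # ~77%, close to the reported 75%
      print(f"ideal response gain at fixed injected mass: {area_ratio:.1f}x")
      # The observed 2-3x response gain sits below the ~4.4x ideal, as expected once
      # extra-column band broadening and detector effects are taken into account.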

  2. High Performance Semantic Factoring of Giga-Scale Semantic Graph Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Adolf, Robert D.; Al-Saffar, Sinan

    2010-10-04

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture, and present the results of our deploying that for the analysis of the Billion Triple dataset with respect to its semantic factors.

  3. LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis

    DTIC Science & Technology

    2016-05-23

    LLMapReduce works with several schedulers, such as SLURM, Grid Engine, and LSF. Keywords: LLMapReduce; map-reduce; performance; scheduler; Grid Engine; SLURM; LSF. I. INTRODUCTION. Large-scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1] ... high-performance interconnects [6], high-performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware ...
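
    For readers unfamiliar with the programming model the tool parallelises, the snippet below is a minimal, generic map-reduce (word count) in plain Python. It illustrates the map, shuffle, and reduce phases only and is not LLMapReduce's actual interface, which launches user programs through cluster schedulers such as SLURM, Grid Engine, or LSF:

      from collections import defaultdict
      from multiprocessing import Pool

      def mapper(line):
          # Emit (key, value) pairs; here: word counts for one input line
          return [(word.lower(), 1) for word in line.split()]

      def reducer(key_values):
          key, values = key_values
          return key, sum(values)

      if __name__ == "__main__":
          lines = ["the quick brown fox", "the lazy dog", "the fox"]
          with Pool(processes=2) as pool:
              mapped = pool.map(mapper, lines)              # map phase, in parallel
          groups = defaultdict(list)                        # shuffle phase: group by key
          for pairs in mapped:
              for key, value in pairs:
                  groups[key].append(value)
          counts = dict(map(reducer, groups.items()))       # reduce phase
          print(counts)                                     # e.g. {'the': 3, 'fox': 2, ...}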

  4. Reading Fluency as a Predictor of Reading Proficiency in Low-Performing, High-Poverty Schools

    ERIC Educational Resources Information Center

    Baker, Scott K.; Smolkowski, Keith; Katz, Rachell; Fien, Hank; Seeley, John R.; Kame'enui, Edward J.; Beck, Carrie Thomas

    2008-01-01

    The purpose of this study was to examine oral reading fluency (ORF) in the context of a large-scale federal reading initiative conducted in low performing, high poverty schools. The objectives were to (a) investigate the relation between ORF and comprehensive reading tests, (b) examine whether slope of performance over time on ORF predicted…

  5. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration

    NASA Technical Reports Server (NTRS)

    Everett, J. R. (Principal Investigator)

    1983-01-01

    Two fully processed test tapes were enhanced and evaluated at scales up to 1:10,000, using both hardcopy output and interactive screen display. At large scale, the Detroit, Michigan scene shows evidence of an along-line data slip every sixteenth line in TM channel 2. Very-large-scale products generated in false color using channels 1, 3, and 4 should be very acceptable for interpretation at scales up to 1:50,000 and useful for change mapping probably up to 1:24,000. Striping visible in water bodies in both natural- and false-color products indicates that the detector calibration is probably performing below preflight specification. For a set of 512 x 512 windows within the NE Arkansas scene, variance-covariance matrices were computed and principal component analyses performed. Initial analysis suggests that the shortwave infrared TM 5 and 6 channels are a highly significant data source. The thermal channel (TM 7) shows negative correlation with TM 1 and 4.
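
    The variance-covariance and principal-component step described above is straightforward to reproduce in outline. The sketch below uses synthetic pixel values standing in for TM band data (not the study's tapes) and computes the band covariance matrix of one 512 x 512 window and its principal components with NumPy:

      import numpy as np

      rng = np.random.default_rng(42)
      n_bands, window = 7, 512

      # Synthetic multispectral window: correlated bands, shape (bands, pixels)
      base = rng.normal(size=(1, window * window))
      bands = (base * rng.uniform(0.5, 1.5, size=(n_bands, 1))
               + rng.normal(0.2, 0.3, size=(n_bands, window * window)))

      cov = np.cov(bands)                          # 7x7 variance-covariance matrix
      eigvals, eigvecs = np.linalg.eigh(cov)       # principal component analysis of the bands
      order = np.argsort(eigvals)[::-1]
      explained = eigvals[order] / eigvals.sum()

      print("band variance-covariance matrix shape:", cov.shape)
      print("variance explained by each principal component:", np.round(explained, 3))
      # The loadings (columns of eigvecs) show how much each band contributes to each
      # component -- the kind of evidence used to judge which bands add information.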

  6. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  7. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  8. High School Closures in New York City: Impacts on Students' Academic Outcomes, Attendance, and Mobility. Brief

    ERIC Educational Resources Information Center

    Kemple, James J.

    2015-01-01

    In the first decade of the 21st century, the New York City (NYC) Department of Education implemented a set of large-scale and much debated high school reforms, which included closing large, low-performing schools, opening new small schools, and extending high school choice to students throughout the district. The school closure process was the…

  9. High School Closures in New York City: Impacts on Students' Academic Outcomes, Attendance, and Mobility. Technical Appendices

    ERIC Educational Resources Information Center

    Kemple, James J.

    2015-01-01

    In the first decade of the 21st century, the New York City (NYC) Department of Education implemented a set of large-scale and much debated high school reforms, which included closing large, low-performing schools, opening new small schools, and extending high school choice to students throughout the district. The school closure process was the…

  10. High School Closures in New York City: Impacts on Students' Academic Outcomes, Attendance, and Mobility. Report

    ERIC Educational Resources Information Center

    Kemple, James J.

    2015-01-01

    In the first decade of the 21st century, the New York City (NYC) Department of Education implemented a set of large-scale and much debated high school reforms, which included closing large, low-performing schools, opening new small schools, and extending high school choice to students throughout the district. The school closure process was the…

  11. Supermassive Black Hole Binaries in High Performance Massively Parallel Direct N-body Simulations on Large GPU Clusters

    NASA Astrophysics Data System (ADS)

    Spurzem, R.; Berczik, P.; Zhong, S.; Nitadori, K.; Hamada, T.; Berentzen, I.; Veles, A.

    2012-07-01

    Astrophysical computer simulations of dense star clusters in galactic nuclei with supermassive black holes are presented, using new cost-efficient supercomputers in China accelerated by graphical processing units (GPUs). We use large, high-accuracy direct N-body simulations with a Hermite scheme and block time steps, parallelised across a large number of nodes on the large scale and across many GPU thread processors on each node on the small scale. A sustained performance of more than 350 Tflop/s is reached for a science run using 1600 Fermi C2050 GPUs simultaneously; a detailed performance model is presented, along with studies for the largest GPU clusters in China with up to Petaflop/s performance and 7000 Fermi GPU cards. In our case study we look at two supermassive black holes with equal and unequal masses embedded in a dense stellar cluster in a galactic nucleus. The hardening processes due to interactions between black holes and stars, effects of rotation in the stellar system, and relativistic forces between the black holes are simultaneously taken into account. The simulation stops at the complete relativistic merger of the black holes.
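
    The computational kernel that the GPUs accelerate in such codes is the all-pairs force evaluation. The fragment below is a deliberately simple, hypothetical NumPy illustration of direct-summation accelerations with Plummer softening and a leapfrog integrator; it is not the Hermite/block-time-step production code, which also evaluates jerks and treats close encounters and relativistic terms:

      import numpy as np

      def accelerations(pos, mass, eps=1e-3):
          """Direct O(N^2) gravitational accelerations (G = 1) with softening eps."""
          dx = pos[None, :, :] - pos[:, None, :]            # pairwise separations, (N, N, 3)
          r2 = (dx ** 2).sum(axis=-1) + eps ** 2            # softened squared distances
          np.fill_diagonal(r2, np.inf)                      # no self-interaction
          inv_r3 = r2 ** -1.5
          return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

      rng = np.random.default_rng(7)
      N = 1000
      pos = rng.normal(size=(N, 3))
      vel = rng.normal(scale=0.1, size=(N, 3))
      mass = np.full(N, 1.0 / N)

      dt = 1e-3
      for _ in range(10):                                   # a few kick-drift-kick steps
          acc = accelerations(pos, mass)
          vel += 0.5 * dt * acc
          pos += dt * vel
          vel += 0.5 * dt * accelerations(pos, mass)

      print("sample acceleration magnitude:", np.linalg.norm(acc[0]))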

  12. PetIGA: A framework for high-performance isogeometric analysis

    DOE PAGES

    Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  13. Users matter : multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  14. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  15. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  16. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000x1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communication protocols (unslotted and slotted ALOHA, and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication at arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
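
    As a point of reference for the protocol comparison mentioned above, the classical textbook throughput curves for the ALOHA family are easy to evaluate: pure (unslotted) ALOHA peaks at S = G*exp(-2G) and slotted ALOHA at S = G*exp(-G), where G is the offered load. The short script below (standard formulas, not results from this paper) computes both peaks:

      import numpy as np

      G = np.linspace(0.01, 3.0, 300)          # offered load (attempts per frame time)
      S_pure    = G * np.exp(-2 * G)           # pure (unslotted) ALOHA throughput
      S_slotted = G * np.exp(-G)               # slotted ALOHA throughput

      print(f"pure ALOHA peak:    S = {S_pure.max():.3f} at G = {G[S_pure.argmax()]:.2f}")
      print(f"slotted ALOHA peak: S = {S_slotted.max():.3f} at G = {G[S_slotted.argmax()]:.2f}")
      # Peaks land near 0.184 (G ~ 0.5) and 0.368 (G ~ 1.0), the familiar 1/2e and 1/e limits.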

  17. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  18. Los Alamos Explosives Performance Key to Stockpile Stewardship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dattelbaum, Dana

    2014-11-03

    As the U.S. Nuclear Deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives that are used in most weapons. At Los Alamos National Laboratory explosives research includes a wide variety of both large- and small-scale experiments that include small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and at the Nevada Nuclear Security Site, underground sub-critical experiments.

  19. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    ERIC Educational Resources Information Center

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  20. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
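
    The parallelization idea borrowed from short-range particle simulations is spatial domain decomposition: because each atom interacts only within a finite cutoff, space can be divided into cells so that neighbour searches touch only adjacent cells. The sketch below is a generic, serial cell-list illustration in Python (hypothetical, not BOPfox code); a real three-dimensional MPI decomposition would additionally exchange halo atoms between neighbouring domains:

      import numpy as np
      from collections import defaultdict
      from itertools import product

      def build_cell_list(pos, box, cutoff):
          """Assign atoms to cubic cells with edge length >= cutoff."""
          n_cells = np.maximum((box // cutoff).astype(int), 1)
          cell_size = box / n_cells
          cells = defaultdict(list)
          for i, p in enumerate(pos):
              cells[tuple((p // cell_size).astype(int) % n_cells)].append(i)
          return cells, n_cells

      def neighbours_within_cutoff(pos, box, cutoff):
          """Find all pairs closer than cutoff, checking only adjacent cells."""
          cells, n_cells = build_cell_list(pos, box, cutoff)
          pairs = set()
          for cell, members in cells.items():
              for offset in product((-1, 0, 1), repeat=3):          # 27 neighbouring cells
                  other = tuple((np.array(cell) + offset) % n_cells)
                  for i in members:
                      for j in cells.get(other, []):
                          if i < j:
                              d = pos[i] - pos[j]
                              d -= box * np.round(d / box)           # periodic boundaries
                              if np.dot(d, d) < cutoff ** 2:
                                  pairs.add((i, j))
          return pairs

      rng = np.random.default_rng(3)
      box = np.array([20.0, 20.0, 20.0])
      pos = rng.uniform(0, box, size=(2000, 3))
      print("pairs within cutoff:", len(neighbours_within_cutoff(pos, box, cutoff=2.5)))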

  1. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.

  2. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    PubMed Central

    Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin

    2018-01-01

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
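
    To give a concrete sense of what the underlying algorithm does, the snippet below is a compact, dense-matrix toy implementation of the two MCL operations, expansion (matrix squaring) and inflation (element-wise powering followed by column renormalization), in NumPy. It is purely illustrative: HipMCL itself operates on sparse, distributed matrices with MPI/OpenMP and applies pruning strategies not shown here:

      import numpy as np

      def mcl(adjacency, inflation=2.0, iterations=50, tol=1e-6):
          """Toy dense Markov Clustering: expansion + inflation until convergence."""
          A = adjacency.astype(float) + np.eye(len(adjacency))   # add self-loops
          M = A / A.sum(axis=0, keepdims=True)                   # column-stochastic matrix
          for _ in range(iterations):
              M_prev = M
              M = M @ M                                          # expansion: two random-walk steps
              M = M ** inflation                                 # inflation: strengthen strong flows
              M = M / M.sum(axis=0, keepdims=True)               # renormalize columns
              if np.abs(M - M_prev).max() < tol:
                  break
          # Rows that retain mass after convergence act as cluster "attractors"
          clusters = {}
          for attractor in np.where(M.sum(axis=1) > tol)[0]:
              clusters[int(attractor)] = set(int(j) for j in np.where(M[attractor] > tol)[0])
          return clusters

      # Two obvious communities joined by a single weak edge
      A = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]])
      # The two communities {0,1,2} and {3,4,5} emerge, possibly listed under more than one attractor
      print(mcl(A))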

  3. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE PAGES

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...

    2018-01-05

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  4. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  5. Performance Status and Change--Measuring Education System Effectiveness with Data from PISA 2000-2009

    ERIC Educational Resources Information Center

    Lenkeit, Jenny; Caro, Daniel H.

    2014-01-01

    Reports of international large-scale assessments tend to evaluate and compare education system performance based on absolute scores. And policymakers refer to high-performing and economically prosperous education systems to enhance their own systemic features. But socioeconomic differences between systems compromise the plausibility of those…

  6. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As feature sizes shrink in the foundries, the need for inline, high-resolution surface profiling with versatile capabilities is increasing. One important area of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) with a decoupled-scanner design. The system is capable of providing small-scale profiling using the XY scanner and large-scale profiling using a sliding stage. The decoupled-scanner design enables enhanced vision, which helps minimize positioning error for locations of interest on highly polished dies. Non-contact mode imaging is another feature of interest in this system and is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of measurements performed using the atomic force profiler are demonstrated.

  7. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  8. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are very costly in labour time or money, a choice has to be made between a high sampling resolution at small scales with low spatial cover of the study area, and a lower sampling resolution at small scales, with the resulting local data uncertainties, but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements arranged in a spatially nested sampling design over these fields, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points obtain, on average, an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing numbers of sampling points per field, we averaged the measured abundance within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field, the performance criteria were 0.91 and 0.97 for explained deviance and correlation coefficient, respectively. The relationship between the number of samplings and the performance criteria can be described by a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, assessment of model results, and ecological inferences.
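
    The saturation behaviour reported above is what simple averaging of noisy point samples predicts: the standard error of a field mean falls as 1/sqrt(n), so the first few samples buy large gains and later ones progressively less. The sketch below uses synthetic numbers chosen only to illustrate the shape (not the study's earthworm data) and reproduces that diminishing-returns curve:

      import numpy as np

      rng = np.random.default_rng(11)
      n_fields = 65
      true_field_means = rng.lognormal(mean=2.0, sigma=0.6, size=n_fields)   # field-scale abundances
      small_scale_sd = 0.8 * true_field_means                                # strong within-field variability

      for n_samples in (1, 2, 5, 10, 50):
          # Average n_samples noisy point measurements per field, then correlate with the truth
          estimates = np.array([
              rng.normal(m, s, size=n_samples).mean()
              for m, s in zip(true_field_means, small_scale_sd)
          ])
          r = np.corrcoef(true_field_means, estimates)[0, 1]
          print(f"{n_samples:3d} samples/field -> correlation with true field mean: {r:.2f}")
      # The correlation climbs steeply from 1 to ~5 samples per field and flattens thereafter,
      # mirroring the saturation curve described in the abstract.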

  9. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and for other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and are therefore designed to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.

  10. High performance semantic factoring of giga-scale semantic graph databases.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    al-Saffar, Sinan; Adolf, Bob; Haglin, David

    2010-10-01

    As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.

  11. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
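
    The two-level idea can be sketched schematically (this is not the Nek5000/adjoint implementation; the forward step, segment length and adjoint seed below are placeholders): coarse checkpoints are written to disk every K steps, and during the reverse sweep each segment is recomputed from its disk checkpoint with the intermediate states held in memory.

        # Schematic two-level checkpointing sketch (placeholder model, not the
        # paper's scheme): disk holds coarse checkpoints every K steps; during the
        # reverse sweep each segment is recomputed and kept in memory for the adjoint.
        import numpy as np

        K, n_steps, dt = 8, 32, 0.01                   # illustrative sizes

        def step(u):                                   # stand-in forward step: u <- u - dt*u^2
            return u - dt * u**2

        u = np.ones(4)
        disk = {0: u.copy()}                           # level 1: disk checkpoints
        for i in range(1, n_steps + 1):
            u = step(u)
            if i % K == 0:
                disk[i] = u.copy()

        adj = np.ones(4)                               # adjoint seed (illustrative)
        for seg_start in sorted(disk, reverse=True):
            memory, u = [], disk[seg_start].copy()     # level 2: in-memory states
            for i in range(seg_start, min(seg_start + K, n_steps)):
                memory.append(u.copy())
                u = step(u)
            for u_saved in reversed(memory):           # adjoint of u <- u - dt*u^2
                adj = adj * (1.0 - 2.0 * dt * u_saved)
        print("adjoint after reverse sweep:", adj)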

  12. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach in cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. hyrkas@cs.washington.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
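
    As a single-node illustration of the core classification step (the paper's implementation is a Java/Hadoop pipeline; the two synthetic populations and the scikit-learn usage below are stand-ins), a Gaussian mixture model can be fitted to log-scaled cytometry-like measurements and each particle assigned to a population.

        # Single-node sketch of Gaussian-mixture classification on synthetic,
        # cytometry-like data (not the SeaFlow pipeline, which runs on Hadoop).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        pop_a = rng.normal([2.0, 1.0], 0.15, (5000, 2))   # e.g. log scatter, log fluorescence
        pop_b = rng.normal([3.0, 2.5], 0.20, (3000, 2))
        events = np.vstack([pop_a, pop_b])

        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(events)
        print("cluster sizes:", np.bincount(labels))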

  13. A novel iron-lead redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Ren, Y. X.

    2017-04-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies for the massive utilization of intermittent renewables especially wind and solar energy. This work presents a novel redox flow battery that utilizes inexpensive and abundant Fe(II)/Fe(III) and Pb/Pb(II) redox couples as redox materials. Experimental results show that both the Fe(II)/Fe(III) and Pb/Pb(II) redox couples have fast electrochemical kinetics in methanesulfonic acid, and that the coulombic efficiency and energy efficiency of the battery are, respectively, as high as 96.2% and 86.2% at 40 mA cm-2. Furthermore, the battery exhibits stable performance in terms of efficiencies and discharge capacities during the cycle test. The inexpensive redox materials, fast electrochemical kinetics and stable cycle performance make the present battery a promising candidate for large-scale energy storage applications.

  14. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both volume and complexity of biomedical data collected from various sources. This planet-scale data brings serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications would facilitate biomedical research by making the vast amount of diverse data meaningful and usable. PMID:24288665

  15. Large-scale coherent structures of suspended dust concentration in the neutral atmospheric surface layer: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Zhang, Yangyue; Hu, Ruifeng; Zheng, Xiaojing

    2018-04-01

    Dust particles can remain suspended in the atmospheric boundary layer, where their motion is primarily determined by turbulent diffusion and gravitational settling. Little is known about the spatial organization of suspended dust concentration and how turbulent coherent motions contribute to the vertical transport of dust particles. Numerous studies in recent years have revealed that the large- and very-large-scale motions found in the logarithmic region of laboratory-scale turbulent boundary layers also exist in the high-Reynolds-number atmospheric boundary layer, but their influence on dust transport is still unclear. In this study, numerical simulations of dust transport in a neutral atmospheric boundary layer, based on an Eulerian modeling approach and the large-eddy simulation technique, are performed to investigate the coherent structures of dust concentration. The instantaneous fields confirm the existence of very long meandering streaks of dust concentration, with alternating high- and low-concentration regions. A strong negative correlation between the streamwise velocity and concentration and a mild positive correlation between the vertical velocity and concentration are observed. The spatial length scales and inclination angles of the concentration structures are determined and compared with their flow counterparts. The conditionally averaged fields vividly depict that high- and low-concentration events are accompanied by a pair of counter-rotating quasi-streamwise vortices, with a downwash inside the low-concentration region and an upwash inside the high-concentration region. The quadrant analysis indicates that the vertical dust transport is closely related to the large-scale roll modes, and that ejections in high-concentration regions are the major mechanism for the upward motion of dust particles.
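
    A minimal sketch of the quadrant analysis mentioned above (synthetic fluctuations, not the LES fields) classifies paired vertical-velocity and concentration fluctuations into four quadrants and reports each quadrant's share of the vertical turbulent flux.

        # Quadrant-analysis sketch on synthetic fluctuations (not the LES output):
        # which (w', c') quadrants carry the vertical turbulent dust flux <w'c'>?
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000
        w = rng.normal(0.0, 0.3, n)                    # vertical velocity fluctuation w'
        c = 0.4 * w + rng.normal(0.0, 0.3, n)          # concentration fluctuation c' (assumed mildly correlated)

        flux = w * c
        quadrants = {
            "Q1 (w'>0, c'>0)": (w > 0) & (c > 0),
            "Q2 (w'>0, c'<0)": (w > 0) & (c < 0),
            "Q3 (w'<0, c'<0)": (w < 0) & (c < 0),
            "Q4 (w'<0, c'>0)": (w < 0) & (c > 0),
        }
        for name, mask in quadrants.items():
            print(f"{name}: {flux[mask].sum() / flux.sum():+.2f} of total flux")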

  16. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    PubMed

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and attempt to rank graph nodes according to link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), and then simultaneously optimize the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
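
    The flavour of graph-based ranking with prior knowledge can be sketched with a plain personalized-PageRank power iteration (this does not implement the paper's semi-supervised parameter learning; the toy graph and preference vector are invented).

        # Personalized-PageRank sketch on a toy graph (illustrative only; the
        # paper's SSP additionally learns parameters from heterogeneous features).
        import numpy as np

        A = np.array([[0, 1, 1, 0],                    # adjacency of a 4-node toy graph
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        P = A / A.sum(axis=1, keepdims=True)           # row-stochastic transition matrix
        pref = np.array([0.7, 0.1, 0.1, 0.1])          # prior knowledge as teleport weights
        alpha, r = 0.85, np.full(4, 0.25)

        for _ in range(100):                           # power iteration
            r = alpha * P.T @ r + (1 - alpha) * pref
        print("ranking scores:", np.round(r, 3))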

  17. Large-scale broadband absorber based on metallic tungsten nanocone structure

    NASA Astrophysics Data System (ADS)

    Wang, Jiaxing; Liang, Yuzhang; Huo, Pengcheng; Wang, Daopeng; Tan, Jun; Xu, Ting

    2017-12-01

    We report a broadband tungsten absorber based on a nanocone metallic resonant structure fabricated by self-assembly nanosphere lithography. In experimental demonstration, the fabricated absorber has more than 90% average absorption efficiency and shows superior angular tolerance in the entire visible and near-infrared spectral region. We envision that this large-scale nanostructured broadband optical absorber would find great potential in the applications of high performance optoelectronic platforms and solar-thermal energy harvesting systems.

  18. Cedar-a large scale multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.; Kuck, D.; Lawrie, D.

    1983-01-01

    This paper presents an overview of Cedar, a large-scale multiprocessor being designed at the University of Illinois. The machine is designed to accommodate several thousand high-performance processors, which can work together on a single job or be partitioned into groups of one or more processors, with each group working on a separate job. Various aspects of the machine are described, including the control methodology, communication network, optimizing compiler, and plans for construction. 13 references.

  19. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform, which provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  20. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. The NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means for performing high-speed, neuromorphic, and massively parallel pattern recognition and classification tasks.
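
    One NEF ingredient behind such hardware is the linear decoding of a represented value from neuron activities; the sketch below (illustrative tuning curves, sizes and regularization, not the FPGA design) solves the regularized least-squares problem for the decoders.

        # NEF-style decoder sketch (illustrative, not the FPGA implementation):
        # fit linear decoders d so that activities(x) @ d approximates x.
        import numpy as np

        rng = np.random.default_rng(3)
        n_neurons, n_points = 64, 200
        x = np.linspace(-1.0, 1.0, n_points)           # represented scalar value
        gains = rng.uniform(0.5, 2.0, n_neurons)
        biases = rng.uniform(-1.0, 1.0, n_neurons)
        encoders = rng.choice([-1.0, 1.0], n_neurons)

        # rectified-linear tuning curves: a_i(x) = max(0, gain_i * e_i * x + bias_i)
        activities = np.maximum(0.0, gains * encoders * x[:, None] + biases)

        reg = 0.1 * n_points                           # assumed regularization strength
        gram = activities.T @ activities + reg * np.eye(n_neurons)
        decoders = np.linalg.solve(gram, activities.T @ x)

        x_hat = activities @ decoders
        print("RMS decode error:", np.sqrt(np.mean((x_hat - x) ** 2)))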

  1. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, with an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. These advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  2. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE PAGES

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    2017-01-07

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, with an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. These advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  3. HRLSim: a high performance spiking neural network simulator for GPGPU clusters.

    PubMed

    Minkovich, Kirill; Thibeault, Corey M; O'Brien, Michael John; Nogin, Aleksey; Cho, Youngkwan; Srinivasa, Narayan

    2014-02-01

    Modeling of large-scale spiking neural networks is an important tool in the quest to understand brain function and subsequently create real-world applications. This paper describes a spiking neural network simulator environment called HRL Spiking Simulator (HRLSim). The simulator is suitable for implementation on a cluster of general purpose graphical processing units (GPGPUs). Novel aspects of HRLSim are described, and an analysis of its performance is provided for various configurations of the cluster. With the advent of inexpensive GPGPU cards and compute power, HRLSim offers an affordable and scalable tool for design, real-time simulation, and analysis of large-scale spiking neural networks.

  4. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation, and scaled-up, three-dimensional versions run on the Cray Y-MP and CM-5 supercomputers.

  5. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment in the ghost zones was found to impose quite a long data-transfer time between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
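
    The ghost-zone packing fix can be illustrated with a NumPy stand-in (the actual code packs buffers on the GPU; array sizes and halo width here are arbitrary): faces sliced along non-leading axes are strided views, so they are copied into contiguous buffers before the host-device and neighbor transfers.

        # NumPy stand-in for ghost-zone packing (the real code does this on the GPU):
        # y- and z-faces of a C-ordered 3-D array are non-contiguous, so they are
        # packed into contiguous buffers before the halo exchange.
        import numpy as np

        nx, ny, nz, halo = 128, 128, 128, 2            # illustrative subdomain size
        u = np.zeros((nx, ny, nz), dtype=np.float32)

        face_view = u[:, :halo, :]                     # strided view of a y-face
        send_buf = np.ascontiguousarray(face_view)     # packed contiguous send buffer

        print("face view contiguous? ", face_view.flags["C_CONTIGUOUS"])   # False
        print("send buffer contiguous?", send_buf.flags["C_CONTIGUOUS"])   # True
        print("bytes per y-face buffer:", send_buf.nbytes)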

  6. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallarno, George; Rogers, James H; Maxwell, Don E

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  8. Turbulent kinetics of a large wind farm and their impact in the neutral boundary layer

    DOE PAGES

    Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo; ...

    2015-12-28

    High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, a high-performance computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference between them, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% in parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that consider interference within the wind farm should be used in wind farm parameterizations for larger-scale models to better describe sub-grid scale turbulent processes.

  9. High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser

    NASA Technical Reports Server (NTRS)

    Barna, P. S.

    1975-01-01

    Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.

  10. The development of a capability for aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    NASA Technical Reports Server (NTRS)

    Bezos, Gaudy M.; Cambell, Bryan A.; Melson, W. Edward

    1989-01-01

    A research technique to obtain large-scale aerodynamic data in a simulated natural rain environment has been developed. A 10-ft chord NACA 64-210 wing section equipped with leading-edge and trailing-edge high-lift devices was tested as part of a program to determine the effect of highly concentrated, short-duration rainfall on airplane performance. Preliminary dry aerodynamic data are presented for the high-lift configuration at a velocity of 100 knots and an angle of attack of 18 deg. Also, data are presented on rainfield uniformity and rainfall concentration intensity levels obtained during the calibration of the rain simulation system.

  11. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  12. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing resolution of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.

  13. A prototype automatic phase compensation module

    NASA Technical Reports Server (NTRS)

    Terry, John D.

    1992-01-01

    The growing demand for high-gain and accurate satellite communication systems will necessitate the utilization of large reflector systems. One area of concern in reflector-based satellite communication is large-scale surface deformation due to thermal effects. These distortions, when present, can degrade the performance of the reflector system appreciably. This performance degradation is manifested by a decrease in peak gain, an increase in sidelobe level, and pointing errors. It is essential to compensate for these distortion effects and to maintain the required system performance in the operating space environment. For this reason, the development of a technique to offset the degradation effects is highly desirable. Currently, most research is directed at developing better materials for the reflector. These materials have a lower coefficient of linear expansion, thereby reducing the surface errors. Alternatively, one can minimize the distortion effects of these large-scale errors by adaptive phased array compensation. Adaptive phased array techniques have been studied extensively at NASA and elsewhere. Presented in this paper is a prototype automatic phase compensation module, designed and built at NASA Lewis Research Center, which is the first stage of development for an adaptive array compensation module.
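
    The compensation principle can be sketched in a few lines (this is illustrative only and not the prototype module's design; the element count and phase errors are invented): if the distorted reflector adds a phase error to each feed path, weighting each element by the conjugate phase restores coherent summation and recovers the peak gain.

        # Phase-conjugate compensation sketch (invented numbers, not the module):
        # conjugate-phase weights cancel the distortion-induced phase errors.
        import numpy as np

        rng = np.random.default_rng(4)
        n_elements = 16
        phase_error = rng.uniform(-np.pi / 2, np.pi / 2, n_elements)   # from surface distortion

        distorted = np.exp(1j * phase_error)                  # per-element field, unit amplitude
        compensated = distorted * np.exp(-1j * phase_error)   # apply conjugate-phase weights

        rel_gain_db = lambda field: 20 * np.log10(np.abs(field.sum()) / n_elements)
        print(f"distorted aperture:   {rel_gain_db(distorted):.1f} dB relative to ideal")
        print(f"compensated aperture: {rel_gain_db(compensated):.1f} dB relative to ideal")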

  14. A study of rotor and platform design trade-offs for large-scale floating vertical axis wind turbines

    NASA Astrophysics Data System (ADS)

    Griffith, D. Todd; Paquette, Joshua; Barone, Matthew; Goupee, Andrew J.; Fowler, Matthew J.; Bull, Diana; Owens, Brian

    2016-09-01

    Vertical axis wind turbines are receiving significant attention for offshore siting. In general, offshore wind offers proximity to large population centers, a vast and more consistent wind resource, and a scale-up opportunity, to name a few beneficial characteristics. On the other hand, offshore wind suffers from a high levelized cost of energy (LCOE) and in particular high balance of system (BoS) costs owing to accessibility challenges and limited project experience. To address these challenges associated with offshore wind, Sandia National Laboratories is researching large-scale (MW class) offshore floating vertical axis wind turbines (VAWTs). The motivation for this work is that floating VAWTs are a potentially transformative technology solution to reduce offshore wind LCOE in deep-water locations. This paper explores performance and cost trade-offs within the design space for floating VAWTs between the configurations for the rotor and platform.

  15. Controlled crystallization and granulation of nano-scale β-Ni(OH)2 cathode materials for high power Ni-MH batteries

    NASA Astrophysics Data System (ADS)

    He, Xiangming; Li, Jianjun; Cheng, Hongwei; Jiang, Changyin; Wan, Chunrong

    A novel synthesis route of controlled crystallization and granulation was attempted to prepare nano-scale β-Ni(OH)2 cathode materials for high power Ni-MH batteries. Nano-scale β-Ni(OH)2 and Co(OH)2 with a diameter of 20 nm were prepared by controlled crystallization, mixed by ball milling, and granulated to form spherical grains of about 5 μm by spray-drying granulation. Both the addition of nano-scale Co(OH)2 and the granulation significantly enhanced the electrochemical performance of nano-scale Ni(OH)2. XRD and TEM analysis showed that there was a large number of defects in the crystal lattice of the as-prepared nano-scale Ni(OH)2, and DTA-TG analysis showed that it had both a lower decomposition temperature and a higher decomposition reaction rate, indicating less thermal stability compared with conventional micro-scale Ni(OH)2 and suggesting higher electrochemical performance. The granulated grains of nano-scale Ni(OH)2 mixed with nano-scale Co(OH)2 at Co/Ni = 1/20 presented the highest specific capacity, reaching the theoretical value of 289 mAh g-1 at 1 C, and also exhibited much improved electrochemical performance at high discharge rates up to 10 C. The granulated grains of nano-scale β-Ni(OH)2 mixed with nano-scale Co(OH)2 are a promising cathode active material for high power Ni-MH batteries.

  16. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to perform processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
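
    The spot-market trade-off quoted above can be made concrete with back-of-the-envelope arithmetic (all prices, interruption overheads and campaign sizes below are invented placeholders, not measured HySDS figures).

        # Back-of-the-envelope spot vs. on-demand comparison (placeholder numbers):
        # a deep discount can absorb a modest re-run overhead from interruptions.
        on_demand_price = 1.00        # $/instance-hour, hypothetical
        spot_price = 0.20             # ~80% discount, within the quoted 75%-90% range
        rerun_overhead = 0.15         # assumed fraction of work repeated after reclaims
        instance_hours = 10_000       # hypothetical processing campaign

        cost_on_demand = instance_hours * on_demand_price
        cost_spot = instance_hours * (1 + rerun_overhead) * spot_price
        saving = 1 - cost_spot / cost_on_demand
        print(f"on-demand ${cost_on_demand:,.0f} vs spot ${cost_spot:,.0f} ({saving:.0%} saving)")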

  17. A global classification of coastal flood hazard climates associated with large-scale oceanographic forcing.

    PubMed

    Rueda, Ana; Vitousek, Sean; Camus, Paula; Tomás, Antonio; Espejo, Antonio; Losada, Inigo J; Barnard, Patrick L; Erikson, Li H; Ruggiero, Peter; Reguero, Borja G; Mendez, Fernando J

    2017-07-11

    Coastal communities throughout the world are exposed to numerous and increasing threats, such as coastal flooding and erosion, saltwater intrusion and wetland degradation. Here, we present the first global-scale analysis of the main drivers of coastal flooding due to large-scale oceanographic factors. Given the large dimensionality of the problem (e.g. spatiotemporal variability in flood magnitude and the relative influence of waves, tides and surge levels), we have performed a computer-based classification to identify geographical areas with homogeneous climates. Results show that 75% of coastal regions around the globe have the potential for very large flooding events with low probabilities (unbounded tails), 82% are tide-dominated, and almost 49% are highly susceptible to increases in flooding frequency due to sea-level rise.

  18. Water surface assisted synthesis of large-scale carbon nanotube film for high-performance and stretchable supercapacitors.

    PubMed

    Yu, Minghao; Zhang, Yangfan; Zeng, Yinxiang; Balogun, Muhammad-Sadeeq; Mai, Kancheng; Zhang, Zishou; Lu, Xihong; Tong, Yexiang

    2014-07-16

    A multiwalled carbon-nanotube (MWCNT)/polydimethylsiloxane (PDMS) film with excellent conductivity and mechanical properties is developed using a facile and large-scale water-surface-assisted synthesis method. The film can act as a conductive support for electrochemically active PANI nanofibers. A device based on these PANI/MWCNT/PDMS electrodes shows good and stable capacitive behavior, even under static and dynamic stretching conditions. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
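
    As a concrete reference point for the simpler end of the methods compared, the sketch below implements plain inverse distance weighting on synthetic stations (coordinates, rainfall values and the power parameter are invented; no geographic co-predictors are used).

        # Inverse distance weighting (IDW) sketch on synthetic stations; one of the
        # simpler interpolation families compared in the study.
        import numpy as np

        def idw(stations_xy, values, grid_xy, power=2.0, eps=1e-9):
            """Interpolate station values onto grid points with 1/d**power weights."""
            d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
            w = 1.0 / (d + eps) ** power
            return (w * values).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(5)
        stations = rng.uniform(0.0, 80.0, (74, 2))      # ~74 stations, km coordinates (assumed extent)
        rain = rng.gamma(2.0, 4.0, 74)                  # synthetic daily precipitation (mm)
        gx, gy = np.meshgrid(np.arange(0.0, 80.0), np.arange(0.0, 80.0))
        grid = np.stack([gx.ravel(), gy.ravel()], axis=1)

        rain_grid = idw(stations, rain, grid)
        print(rain_grid.shape, float(rain_grid.min()), float(rain_grid.max()))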

  20. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    PubMed

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial to produce high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be compromised by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, where the thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface tension relaxation. Compact, homogeneous, and convection-induced-defect-free perovskite films are obtained on an area of 12 cm2, which enables a power conversion efficiency (PCE) of 15.5% on a solar cell with an area of 5 cm2. This is the highest efficiency at this large cell area. A PCE of 15.3% is also obtained on a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the advantage of the presented low-temperature processing. Hence, the present LT-SCD technology provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Enhancing Performance of Large-Area Organic Solar Cells with Thick Film via Ternary Strategy.

    PubMed

    Zhang, Jianqi; Zhao, Yifan; Fang, Jin; Yuan, Liu; Xia, Benzheng; Wang, Guodong; Wang, Zaiyu; Zhang, Yajie; Ma, Wei; Yan, Wei; Su, Wenming; Wei, Zhixiang

    2017-06-01

    Large-scale fabrication of organic solar cells requires an active layer with high thickness tolerability and the use of environment-friendly solvents. Thick films with high performance can be achieved via the ternary strategy studied herein. The ternary system consists of one polymer donor, one small molecule donor, and one fullerene acceptor. The small molecule enhances the crystallinity and face-on orientation of the active layer, leading to improved thickness tolerability compared with that of a polymer-fullerene binary system. An active layer with 270 nm thickness exhibits an average power conversion efficiency (PCE) of 10.78%, while the PCE of the binary system is less than 8% at such a thickness. Furthermore, large-area devices are successfully fabricated using polyethylene terephthalate (PET)/silver grid or indium tin oxide (ITO)-based transparent flexible substrates. The product shows a high PCE of 8.28% for a single cell with an area of 1.25 cm2 and 5.18% for a 20 cm2 module. This study demonstrates that ternary organic solar cells exhibit great potential for large-scale fabrication and future applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances of O3 and PM over the Po Valley and other selected areas are computed. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of run A and run B. A propagation of the signal outside the nudging region is observed, and is evaluated in terms of differences between the coarse-resolution simulations (with and without nudging) and the fine-resolution simulation.
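
    The nudging step itself amounts to Newtonian relaxation of the coarse field toward the remapped fine field inside the target area; the sketch below is schematic (grid sizes, relaxation time and concentrations are invented, and it is not BOLCHEM code).

        # Schematic nudging (Newtonian relaxation) step, not BOLCHEM code:
        # inside the mask, relax the coarse concentration toward the remapped
        # high-resolution field with relaxation time tau.
        import numpy as np

        nx, ny = 60, 40
        coarse = np.full((nx, ny), 40.0)               # e.g. O3 on the coarse grid (invented)
        fine_remapped = coarse + 8.0                   # remapped fine field (invented offset)
        mask = np.zeros((nx, ny), dtype=bool)
        mask[20:35, 10:25] = True                      # stand-in for the Po Valley area

        dt, tau = 600.0, 3600.0                        # time step and relaxation time (s), assumed
        for _ in range(24):                            # a few hours of integration
            coarse[mask] += dt / tau * (fine_remapped[mask] - coarse[mask])

        print("mean inside nudging area :", coarse[mask].mean())
        print("mean outside nudging area:", coarse[~mask].mean())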

  3. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent, because many research activities are constrained by software or tools that cannot complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that is not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.

  4. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.

  5. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650

  6. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  7. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom

    2013-01-01

    Exascale computing systems are soon to emerge, and they will pose great challenges given the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that prevent applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.

  8. Large-Scale Simulation of Multi-Asset Ising Financial Markets

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-03-01

    We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering, and exhibits unstable periods indicated by the volatility index, measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
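    A minimal sketch of the IPR calculation described above is given below: it forms the cross-correlation matrix of absolute returns, takes its eigenvectors, and computes IPR and IPR6 for each eigenvector. The normalization and conventions follow common usage and may differ in detail from those used in the paper.

    ```python
    # Sketch: inverse participation ratio (IPR) and its sixth-power variant from
    # the cross-correlation matrix of absolute returns.  Conventions follow
    # common usage and may differ in detail from the paper.
    import numpy as np

    def ipr_spectrum(abs_returns):
        """abs_returns: (T, N) array of absolute returns for N assets."""
        corr = np.corrcoef(abs_returns, rowvar=False)   # N x N correlation matrix
        eigvals, eigvecs = np.linalg.eigh(corr)          # unit-norm eigenvectors in columns
        ipr  = np.sum(np.abs(eigvecs) ** 4, axis=0)      # IPR_k  = sum_i |v_ik|^4
        ipr6 = np.sum(np.abs(eigvecs) ** 6, axis=0)      # IPR6_k = sum_i |v_ik|^6
        return eigvals, ipr, ipr6

    rng = np.random.default_rng(0)
    eigvals, ipr, ipr6 = ipr_spectrum(np.abs(rng.standard_normal((500, 300))))
    print(ipr[-1], ipr6[-1])   # values for the largest eigenvalue's eigenvector
    ```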

  9. Translational bioinformatics in the cloud: an affordable alternative

    PubMed Central

    2010-01-01

    With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073

  10. Keeping on Track: Performance Profiles of Low Performers in Academic Educational Tracks

    ERIC Educational Resources Information Center

    Reed, Helen C.; van Wesel, Floryt; Ouwehand, Carolijn; Jolles, Jelle

    2015-01-01

    In countries with high differentiation between academic and vocational education, an individual's future prospects are strongly determined by the educational track to which he or she is assigned. This large-scale, cross-sectional study focuses on low-performing students in academic tracks who face being moved to a vocational track. If more is…

  11. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels operated successfully, with a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  12. Performance of Aqueous Film Forming Foam (AFFF) on Large-Scale Hydroprocessed Renewable Jet (HRJ) Fuel Fires

    DTIC Science & Technology

    2011-12-01

    Report AFRL-RX-TY-TR-2012-0012 (contract FA4819-09-C-0030) evaluates whether aqueous film forming foam (AFFF) firefighting agents and equipment are capable of extinguishing large-scale hydroprocessed renewable jet (HRJ) fuel fires.

  13. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large areas abroad or with large image blocks. In this paper, addressing the geometric characteristics of optical satellite imagery and building on the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, and on RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for super large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis of block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and analyzed.
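    Since the abstract's key optimization tool is ADMM, the sketch below shows the characteristic alternating x/z/u updates of ADMM on a standard l1-regularized least-squares (lasso) problem. It is a generic illustration of the method only; the GISIBA block-adjustment formulation, the RFM model and the virtual control points are not reproduced, and all symbols are generic.

    ```python
    # Generic ADMM illustration on an l1-regularized least-squares problem
    # (lasso), shown only to make the alternating x/z/u updates concrete.
    # This is NOT the GISIBA block-adjustment formulation itself.
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        m, n = A.shape
        x = z = u = np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse each iteration
        for _ in range(iters):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
            u = u + x - z
        return z

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    print(np.round(admm_lasso(A, b), 2))
    ```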

  14. Guided growth of large-scale, horizontally aligned arrays of single-walled carbon nanotubes and their use in thin-film transistors.

    PubMed

    Kocabas, Coskun; Hur, Seung-Hyun; Gaur, Anshu; Meitl, Matthew A; Shim, Moonsub; Rogers, John A

    2005-11-01

    A convenient process for generating large-scale, horizontally aligned arrays of pristine, single-walled carbon nanotubes (SWNTs) is described. The approach uses guided growth, by chemical vapor deposition (CVD), of SWNTs on miscut single-crystal quartz substrates. Studies of the growth reveal important relationships between the density and alignment of the tubes, the CVD conditions, and the morphology of the quartz. Electrodes and dielectrics patterned on top of these arrays yield thin-film transistors that use the SWNTs as effective thin-film semiconductors. The ability to build high-performance devices of this type suggests significant promise for large-scale aligned arrays of SWNTs in electronics, sensors, and other applications.

  15. Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System.

    PubMed

    Conway, Michael K; Gerger, Michael J; Balay, Erin E; O'Connell, Rachel; Hanson, Seth; Daily, Neil J; Wakatsuki, Tetsuro

    2015-05-14

    Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large-scale cell production, yet this can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and where scale-up can be prohibitively expensive. To enable high-throughput, large-scale stem cell production and to remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols, human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on the cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulations designed for 96-well plates. This technology will reduce the labor and technical burden of producing large numbers of identical stem cells for a myriad of applications.

  16. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    ... models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.

  17. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets creates a key challenge for operators' careful network planning. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet EA performance often deteriorates rapidly with the growth of search-space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Noting in particular that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that, in terms of system throughput, the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring changes in the objective function.
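    The following is a small, hedged sketch of the correlation-grouping idea: cells whose pairwise interference exceeds a threshold are merged into the same group (here with a simple union-find), so that each group can then be optimized by its own smaller evolutionary sub-problem. The threshold, the interference matrix and the grouping rule are illustrative assumptions, not the paper's exact scheme.

    ```python
    # Illustrative grouping of cells by mutual interference; the threshold rule
    # and the toy matrix are assumptions, not the paper's exact decomposition.
    import numpy as np

    def group_cells(intf, threshold):
        """intf: symmetric (N, N) matrix of pairwise inter-cell interference."""
        n = intf.shape[0]
        parent = list(range(n))

        def find(i):                          # union-find with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if intf[i, j] > threshold:
                    parent[find(i)] = find(j)  # merge the two groups

        groups = {}
        for i in range(n):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    intf = np.array([[0.0, 0.9, 0.1, 0.0],
                     [0.9, 0.0, 0.2, 0.0],
                     [0.1, 0.2, 0.0, 0.8],
                     [0.0, 0.0, 0.8, 0.0]])
    print(group_cells(intf, threshold=0.5))   # e.g. [[0, 1], [2, 3]]
    ```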

  18. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.

  19. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-10-01

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost along with insufficient yield has inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m2 g-1, present high specific capacitances of 308 and 200 F g-1 in KOH electrolyte at current densities of 0.1 and 10 A g-1, respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits a capacitance of up to 220 F g-1 at 0.1 A g-1 and excellent flexibility (with negligible capacitance loss under different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed by graphene or CNTs. Therefore, the ultralow-cost and simple strategy demonstrated here has great potential for the scalable production of high-performance carbon-based supercapacitors in industry.

  20. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors.

    PubMed

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-10-16

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost along with insufficient yield has inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m(2) g(-1), present high specific capacitances of 308 and 200 F g(-1) in KOH electrolyte at current densities of 0.1 and 10 A g(-1), respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits a capacitance of up to 220 F g(-1) at 0.1 A g(-1) and excellent flexibility (with negligible capacitance loss under different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed by graphene or CNTs. Therefore, the ultralow-cost and simple strategy demonstrated here has great potential for the scalable production of high-performance carbon-based supercapacitors in industry.

  1. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors

    PubMed Central

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-01-01

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost along with insufficient yield has inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m2 g−1, present high specific capacitances of 308 and 200 F g−1 in KOH electrolyte at current densities of 0.1 and 10 A g−1, respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits a capacitance of up to 220 F g−1 at 0.1 A g−1 and excellent flexibility (with negligible capacitance loss under different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed by graphene or CNTs. Therefore, the ultralow-cost and simple strategy demonstrated here has great potential for the scalable production of high-performance carbon-based supercapacitors in industry. PMID:26472144

  2. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2011-08-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm, with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
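    One plausible, simplified reading of the storage-distribution step described above is sketched below: the high-resolution topographic-index values inside a single large-scale grid cell are binned into classes, and each class is assigned a storage capacity that varies linearly with its topographic index, with the maximum capacity proportional to the topographic-index range through a single scaling parameter. The binning and the direction of the mapping are assumptions made for illustration; the actual TRG formulation may differ in detail.

    ```python
    # Illustrative, simplified reading of the TRG storage-distribution step;
    # the class binning and the direction of the index-to-capacity mapping are
    # assumptions, not the published algorithm.
    import numpy as np

    def storage_distribution(topo_index, scale_param, n_classes=20):
        """topo_index: 1-D array of topographic-index values inside one grid cell."""
        ti_min, ti_max = topo_index.min(), topo_index.max()
        edges = np.linspace(ti_min, ti_max, n_classes + 1)
        class_mid = 0.5 * (edges[:-1] + edges[1:])
        counts, _ = np.histogram(topo_index, bins=edges)
        area_frac = counts / counts.sum()
        # Per-class storage capacity: 0 at the lowest index, up to roughly
        # scale_param * (ti_max - ti_min) at the highest index.
        capacity = scale_param * (class_mid - ti_min)
        return class_mid, area_frac, capacity

    rng = np.random.default_rng(2)
    mid, frac, cap = storage_distribution(rng.gamma(4.0, 2.0, 10_000), scale_param=5.0)
    print(cap.max(), (frac * cap).sum())   # max capacity and areal-mean capacity
    ```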

  3. Large-scale runoff generation - parsimonious parameterisation using high-resolution topography

    NASA Astrophysics Data System (ADS)

    Gong, L.; Halldin, S.; Xu, C.-Y.

    2010-09-01

    World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm, with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.

  4. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  5. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  6. Java Performance for Scientific Applications on LLNL Computer Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  7. High-frequency self-aligned graphene transistors with transferred gate stacks.

    PubMed

    Cheng, Rui; Bai, Jingwei; Liao, Lei; Zhou, Hailong; Chen, Yu; Liu, Lixin; Lin, Yung-Chen; Jiang, Shan; Huang, Yu; Duan, Xiangfeng

    2012-07-17

    Graphene has attracted enormous attention for radio-frequency transistor applications because of its exceptional high carrier mobility, high carrier saturation velocity, and large critical current density. Herein we report a new approach for the scalable fabrication of high-performance graphene transistors with transferred gate stacks. Specifically, arrays of gate stacks are first patterned on a sacrificial substrate, and then transferred onto arbitrary substrates with graphene on top. A self-aligned process, enabled by the unique structure of the transferred gate stacks, is then used to position precisely the source and drain electrodes with minimized access resistance or parasitic capacitance. This process has therefore enabled scalable fabrication of self-aligned graphene transistors with unprecedented performance including a record-high cutoff frequency up to 427 GHz. Our study defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra-high-frequency circuits.

  8. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis, and many are applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High performance virtual machines are deployed and scaled according to the individual scientist's requirements, specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations, dramatically speeding up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the exascale architectures required for future systems.
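    To illustrate the spatiotemporal-index idea in the simplest possible terms, the sketch below keeps a small relational table that maps time ranges and bounding boxes to file (or HDFS block) locations, so a query can be resolved quickly to the chunks that hold the data. The schema, paths and field names are hypothetical, not the actual NCCS/ADAPT implementation.

    ```python
    # Minimal sketch of a spatiotemporal chunk index kept in a relational table;
    # schema, paths and field names are hypothetical.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE chunk_index (
                       path     TEXT,      -- HDFS path / block identifier
                       t_start  TEXT,      -- ISO timestamps covered by the chunk
                       t_end    TEXT,
                       lat_min  REAL, lat_max REAL,
                       lon_min  REAL, lon_max REAL)""")

    con.executemany("INSERT INTO chunk_index VALUES (?,?,?,?,?,?,?)", [
        ("/data/merra/1980/01.nc", "1980-01-01", "1980-01-31", -90, 90, -180, 180),
        ("/data/merra/1980/02.nc", "1980-02-01", "1980-02-29", -90, 90, -180, 180),
    ])

    rows = con.execute("""SELECT path FROM chunk_index
                          WHERE t_end >= ? AND t_start <= ?
                            AND lat_max >= ? AND lat_min <= ?
                            AND lon_max >= ? AND lon_min <= ?""",
                       ("1980-02-10", "1980-02-20", 30, 60, -10, 40)).fetchall()
    print([r[0] for r in rows])   # -> ['/data/merra/1980/02.nc']
    ```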

  9. Modal interactions between a large-wavelength inclined interface and small-wavelength multimode perturbations in a Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    McFarland, Jacob A.; Reilly, David; Black, Wolfgang; Greenough, Jeffrey A.; Ranjan, Devesh

    2015-07-01

    The interaction of a small-wavelength multimodal perturbation with a large-wavelength inclined interface perturbation is investigated for the reshocked Richtmyer-Meshkov instability using three-dimensional simulations. The ares code, developed at Lawrence Livermore National Laboratory, was used for these simulations, and a detailed comparison of simulation results and experiments performed at the Georgia Tech Shock Tube facility is presented first for code validation. Simulation results are presented for four cases that vary in large-wavelength perturbation amplitude and the presence of secondary small-wavelength multimode perturbations. Previously developed measures of mixing and turbulence quantities are presented that highlight the large variation in perturbation length scales created by the inclined interface and the multimode complex perturbation. Measures of entrainment and turbulence anisotropy are developed that help to identify the effects of, and the competition between, each perturbation type. It is shown through multiple measures that before reshock the flow possesses a distinct memory of the initial conditions that is present in both large-scale-driven entrainment measures and small-scale-driven mixing measures. After reshock the flow develops into a turbulent-like state that retains a memory of high-amplitude but not low-amplitude large-wavelength perturbations. It is also shown that the high-amplitude large-wavelength perturbation is capable of producing small-scale mixing and turbulent features similar to the small-wavelength multimode perturbations.

  10. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.

  11. Accelerating DNA analysis applications on GPU clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste

    DNA analysis is an emerging application of high performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search for and on the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI with pthreads based implementations for a homogeneous cluster of x86 processors, discussing stability vs. performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
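    For readers unfamiliar with the algorithm being accelerated, the following is a compact, single-threaded Aho-Corasick reference implementation (trie construction, failure links by breadth-first search, and streaming search). It is meant only to make the automaton concrete; the GPU mapping and MPI partitioning discussed in the paper are not reproduced here.

    ```python
    # Compact single-threaded Aho-Corasick reference; not the accelerated
    # GPU/MPI implementation described in the paper.
    from collections import deque

    def build_automaton(patterns):
        goto, fail, out = [{}], [0], [[]]
        for p in patterns:                         # build the trie
            s = 0
            for ch in p:
                if ch not in goto[s]:
                    goto.append({}); fail.append(0); out.append([])
                    goto[s][ch] = len(goto) - 1
                s = goto[s][ch]
            out[s].append(p)
        queue = deque(goto[0].values())            # BFS to fill failure links
        while queue:
            s = queue.popleft()
            for ch, t in goto[s].items():
                queue.append(t)
                f = fail[s]
                while f and ch not in goto[f]:
                    f = fail[f]
                fail[t] = goto[f].get(ch, 0)
                out[t] += out[fail[t]]             # inherit matches from the fail state
        return goto, fail, out

    def search(text, automaton):
        goto, fail, out = automaton
        s, hits = 0, []
        for i, ch in enumerate(text):
            while s and ch not in goto[s]:
                s = fail[s]
            s = goto[s].get(ch, 0)
            for p in out[s]:
                hits.append((i - len(p) + 1, p))
        return hits

    ac = build_automaton(["he", "she", "his", "hers"])
    print(search("ushers", ac))   # -> [(1, 'she'), (2, 'he'), (2, 'hers')]
    ```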

  12. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
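    As a toy illustration of the data structure at the heart of the assembler, the sketch below builds a de Bruijn graph from reads (edges from each k-mer's prefix to its suffix) and walks unambiguous paths into a contig. It is a single-machine teaching example only; GiGA's distributed Hadoop/Giraph construction, error correction and scaling machinery are not reproduced, and the read set is invented.

    ```python
    # Toy, single-machine de Bruijn graph plus a greedy walk over unambiguous
    # nodes; not GiGA's distributed implementation.
    from collections import defaultdict

    def build_de_bruijn(reads, k):
        """Edges go from the (k-1)-mer prefix to the (k-1)-mer suffix of each k-mer.
        Duplicate edges (read overlaps / coverage) are collapsed for simplicity."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    def greedy_contig(graph, start):
        """Extend a contig while the path is unambiguous (exactly one successor)."""
        contig, node, visited = start, start, {start}
        while len(graph.get(node, ())) == 1:
            nxt = next(iter(graph[node]))
            if nxt in visited:            # stop rather than loop on cycles
                break
            contig += nxt[-1]
            visited.add(nxt)
            node = nxt
        return contig

    reads = ["ACGTTG", "GTTGCA"]          # invented toy reads
    print(greedy_contig(build_de_bruijn(reads, k=4), "ACG"))   # -> 'ACGTTGCA'
    ```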

  13. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  14. Multi-Scale Multi-Domain Model | Transportation Research | NREL

    Science.gov Websites

    NREL's Multi-Scale Multi-Domain (MSMD) model framework quantifies the impacts of the electrical/thermal pathway: macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems, and the MSMD framework addresses these factors across scales.

  15. Advances in DNA sequencing technologies for high resolution HLA typing.

    PubMed

    Cereb, Nezih; Kim, Hwa Ran; Ryu, Jaejun; Yang, Soo Young

    2015-12-01

    This communication describes our experience in large-scale G group-level high resolution HLA typing using three different DNA sequencing platforms - ABI 3730 xl, Illumina MiSeq and PacBio RS II. Recent advances in DNA sequencing technologies, so-called next generation sequencing (NGS), have brought breakthroughs in deciphering the genetic information of all living species at a large scale and at an affordable level. The NGS DNA indexing system allows sequencing of multiple genes for a large number of individuals in a single run. Our laboratory has adopted and used these technologies for HLA molecular testing services. We found that each sequencing technology has its own strengths and weaknesses, and their sequencing performances complement each other. HLA genes are highly complex and genotyping them is quite challenging. Using these three sequencing platforms, we were able to meet all requirements for G group-level high resolution and high volume HLA typing. Copyright © 2015 American Society for Histocompatibility and Immunogenetics. Published by Elsevier Inc. All rights reserved.

  16. Cross-flow turbines: progress report on physical and numerical model studies at large laboratory scale

    NASA Astrophysics Data System (ADS)

    Wosnik, Martin; Bachant, Peter

    2016-11-01

    Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines of diameter D ~ O(1 m) using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.

  17. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
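    A minimal sketch of the core coding step is shown below: non-negative sparse coding of one feature vector against a fixed dictionary, solved by projected gradient descent. The Fisher-discriminative term, the dictionary learning, and the multi-scale histogram pooling used in the paper are not reproduced; the dictionary and signal here are synthetic.

    ```python
    # Minimal non-negative sparse coding of one feature vector against a fixed
    # dictionary via projected gradient descent; only the core coding step is
    # illustrated, with synthetic data.
    import numpy as np

    def nn_sparse_code(x, D, lam=0.1, iters=500):
        """Minimize 0.5*||x - D a||^2 + lam*sum(a)  subject to  a >= 0."""
        step = 1.0 / np.linalg.norm(D.T @ D, 2)     # 1/L, with L the Lipschitz constant
        a = np.zeros(D.shape[1])
        for _ in range(iters):
            grad = D.T @ (D @ a - x) + lam          # gradient of smooth + linear parts
            a = np.maximum(a - step * grad, 0.0)    # project onto the nonnegative orthant
        return a

    rng = np.random.default_rng(3)
    D = rng.random((64, 256))                       # synthetic nonnegative dictionary
    D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
    x = D[:, [5, 40, 200]] @ np.array([1.0, 0.5, 2.0])
    a = nn_sparse_code(x, D)
    print(np.count_nonzero(a > 1e-3), a[[5, 40, 200]])  # active atoms and chosen coefficients
    ```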

  18. High dimensional biological data retrieval optimization with NoSQL technology.

    PubMed

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.
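    To make the key-value design concrete, the sketch below illustrates one possible composite row-key layout for expression values, with a plain Python dict standing in for the HBase table. The key layout (trial|patient|probe) and the prefix-scan emulation are hypothetical examples of the general approach, not the actual tranSMART/HBase schema.

    ```python
    # Illustration of a composite row-key design for gene-expression values.
    # A plain dict stands in for HBase, and the key layout is a hypothetical
    # example, not the tranSMART schema.

    def row_key(trial_id, patient_id, probe_id):
        # Fixed-width, zero-padded components keep keys lexicographically sorted,
        # so all probes of one patient form a contiguous key range (a cheap scan).
        return f"{trial_id}|{patient_id:010d}|{probe_id}"

    store = {}                                   # stand-in for an HBase table
    store[row_key("GSE2658", 17, "ILMN_1651229")] = 8.91
    store[row_key("GSE2658", 17, "ILMN_1651254")] = 5.12
    store[row_key("GSE2658", 42, "ILMN_1651229")] = 7.03

    prefix = f"GSE2658|{17:010d}|"               # range scan: one patient, all probes
    patient_17 = {k: v for k, v in store.items() if k.startswith(prefix)}
    print(patient_17)
    ```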

  19. High dimensional biological data retrieval optimization with NoSQL technology

    PubMed Central

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347

  20. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    NASA Astrophysics Data System (ADS)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems but is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  1. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
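    For reference, the eddy viscosity being modeled and the equivalent filter width for an anisotropic grid can be written as below; the aspect-ratio correction f(a1, a2), with a1 and a2 the two grid aspect ratios, is quoted as it is commonly stated from the cited Scotti et al. (1993) analysis.

    ```latex
    % Standard Smagorinsky eddy viscosity with the anisotropic equivalent filter
    % width; f(a_1, a_2) is the aspect-ratio correction of Scotti et al. (1993)
    % as commonly quoted.
    \[
      \nu_t = (C_s \Delta_{\mathrm{eq}})^2\,|\bar{S}|,
      \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
    \]
    \[
      \Delta_{\mathrm{eq}} = (\Delta_1 \Delta_2 \Delta_3)^{1/3} f(a_1, a_2),
      \qquad
      f(a_1, a_2) = \cosh\sqrt{\tfrac{4}{27}\left[(\ln a_1)^2 - \ln a_1 \ln a_2 + (\ln a_2)^2\right]}.
    \]
    ```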

  2. Combined climate and carbon-cycle effects of large-scale deforestation

    PubMed Central

    Bala, G.; Caldeira, K.; Wickett, M.; Phillips, T. J.; Lobell, D. B.; Delire, C.; Mirin, A.

    2007-01-01

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate. PMID:17420463

  3. Combined climate and carbon-cycle effects of large-scale deforestation.

    PubMed

    Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A

    2007-04-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO(2) to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  4. Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bala, G; Caldeira, K; Wickett, M

    2006-10-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  5. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data, without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, the design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
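
    The correction step described above can be pictured as a conjugate Bayesian update of response-surface coefficients, with the small-scale fit acting as the prior and the handful of large-scale runs as the data. The sketch below is my own minimal illustration under that assumption (a quadratic surface in two factors with Gaussian noise); it is not the authors' actual procedure.

```python
# Minimal sketch: Bayesian update of quadratic response-surface coefficients.
import numpy as np

def design(X):
    """Quadratic design matrix in two factors; X has rows [x1, x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def bayesian_update(beta_prior, Sigma_prior, X_large, y_large, noise_var=1.0):
    """Posterior mean/covariance of the coefficients given large-scale data,
    with the small-scale fit acting as the (Gaussian) prior."""
    Phi = design(X_large)
    prior_prec = np.linalg.inv(Sigma_prior)
    Sigma_post = np.linalg.inv(prior_prec + Phi.T @ Phi / noise_var)
    beta_post = Sigma_post @ (prior_prec @ beta_prior + Phi.T @ y_large / noise_var)
    return beta_post, Sigma_post

# usage sketch: beta_prior, Sigma_prior come from the small-scale DoE fit;
# X_large, y_large are the few large-scale runs for one response (e.g. hardness).
```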

  6. Batteries for electric road vehicles.

    PubMed

    Goodenough, John B; Braga, M Helena

    2018-01-15

    The dependence of modern society on the energy stored in a fossil fuel is not sustainable. An immediate challenge is to eliminate the polluting gases emitted from the roads of the world by replacing road vehicles powered by the internal combustion engine with those powered by rechargeable batteries. These batteries must be safe and competitive in cost, performance, driving range between charges, and convenience. The competitive performance of an electric car has been demonstrated, but the cost of fabrication, management to ensure safety, and a short cycle life have prevented large-scale penetration of the all-electric road vehicle into the market. Low-cost, safe all-solid-state cells from which dendrite-free alkali-metal anodes can be plated are now available; they have an operating temperature range from -20 °C to 80 °C and they permit the design of novel high-capacity, high-voltage cathodes providing fast charge/discharge rates. Scale-up to large multicell batteries is feasible.

  7. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the lower scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvement in wind plant performance and enhancements to the transmission infrastructure will also be discussed.

  8. Will COBE challenge the inflationary paradigm - Cosmic microwave background anisotropies versus large-scale streaming motions revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorski, K.M.

    1991-03-01

    The relation between cosmic microwave background (CMB) anisotropies and large-scale galaxy streaming motions is examined within the framework of inflationary cosmology. The minimal Sachs and Wolfe (1967) CMB anisotropies at large angular scales in the models with initial Harrison-Zel'dovich spectrum of inhomogeneity normalized to the local large-scale bulk flow, which are independent of the Hubble constant and specific nature of dark matter, are found to be within the anticipated ultimate sensitivity limits of COBE's Differential Microwave Radiometer experiment. For example, the most likely value of the quadrupole coefficient is predicted to be a2 not less than 7 × 10^-6, where equality applies to the limiting minimal model. If (1) COBE's DMR instruments perform well throughout the two-year period; (2) the anisotropy data are not marred by the systematic errors; (3) the large-scale motions retain their present observational status; (4) there is no statistical conspiracy in a sense of the measured bulk flow being of untypically high and the large-scale anisotropy of untypically low amplitudes; and (5) the low-order multipoles in the all-sky primordial fireball temperature map are not detected, the inflationary paradigm will have to be questioned. 19 refs.

  9. Absolute pitch among students at the Shanghai Conservatory of Music: a large-scale direct-test study.

    PubMed

    Deutsch, Diana; Li, Xiaonuo; Shen, Jing

    2013-11-01

    This paper reports a large-scale direct-test study of absolute pitch (AP) in students at the Shanghai Conservatory of Music. Overall note-naming scores were very high, with high scores correlating positively with early onset of musical training. Students who had begun training at age ≤5 yr scored 83% correct not allowing for semitone errors and 90% correct allowing for semitone errors. Performance levels were higher for white key pitches than for black key pitches. This effect was greater for orchestral performers than for pianists, indicating that it cannot be attributed to early training on the piano. Rather, accuracy in identifying notes of different names (C, C#, D, etc.) correlated with their frequency of occurrence in a large sample of music taken from the Western tonal repertoire. There was also an effect of pitch range, so that performance on tones in the two-octave range beginning on Middle C was higher than on tones in the octave below Middle C. In addition, semitone errors tended to be on the sharp side. The evidence also ran counter to the hypothesis, previously advanced by others, that the note A plays a special role in pitch identification judgments.

  10. Potential of dynamically harmonized Fourier transform ion cyclotron resonance cell for high-throughput metabolomics fingerprinting: control of data quality.

    PubMed

    Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle

    2018-01-01

    Due to the presence of pollutants in the environment and food, the assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from the repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of eventual technical drifts. RP values greater than 10^6 and mass measurement accuracy better than 1 ppm were obtained using broadband mode, resulting in the detection of isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative ion mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell for performing large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.
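
    As a concrete picture of the QC-based drift correction mentioned above, the following sketch (my own illustration; the record does not describe the exact algorithm) fits a per-feature intensity trend across the repeated QC injections and rescales every injection by it. The polynomial trend and the median-based scaling are assumptions.

```python
# Minimal sketch of QC-anchored intensity drift correction for DIMS features.
import numpy as np

def correct_drift(intensity, order, is_qc, degree=1):
    """intensity: (injections x features) matrix; order: injection order;
    is_qc: Boolean mask marking the QC injections."""
    corrected = intensity.astype(float).copy()
    for j in range(intensity.shape[1]):
        # fit the feature's drift across QC injections only
        coeffs = np.polyfit(order[is_qc], intensity[is_qc, j], degree)
        trend = np.polyval(coeffs, order)
        # rescale every injection so the QC trend is flattened to its median
        corrected[:, j] *= np.median(intensity[is_qc, j]) / trend
    return corrected
```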

  11. The cosmic ray muon tomography facility based on large scale MRPC detectors

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping

    2015-06-01

    Cosmic ray muon tomography is a novel technology to detect high-Z material. A prototype of TUMUTY with 73.6 cm × 73.6 cm large-scale position-sensitive MRPC detectors has been developed and is introduced in this paper. Three test kits have been tested, and images were reconstructed using the MAP algorithm. The reconstruction results show that the prototype is working well and that objects with complex structure and small size (20 mm) can be imaged with it, while high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.

  12. High-Performance and Omnidirectional Thin-Film Amorphous Silicon Solar Cell Modules Achieved by 3D Geometry Design.

    PubMed

    Yu, Dongliang; Yin, Min; Lu, Linfeng; Zhang, Hanzhong; Chen, Xiaoyuan; Zhu, Xufei; Che, Jianfei; Li, Dongdong

    2015-11-01

    High-performance thin-film hydrogenated amorphous silicon solar cells are achieved by combining macroscale 3D tubular substrates and nanoscaled 3D cone-like antireflective films. The tubular geometry delivers a series of advantages for large-scale deployment of photovoltaics, such as omnidirectional performance, easier encapsulation, decreased wind resistance, and easy integration with a second device inside the glass tube. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Performance Characterization of Global Address Space Applications: A Case Study with NWChem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer

    The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generation Infiniband interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.

  14. A low-frequency chip-scale optomechanical oscillator with 58 kHz mechanical stiffening and more than 100th-order stable harmonics.

    PubMed

    Huang, Yongjun; Flores, Jaime Gonzalo Flor; Cai, Ziqiang; Yu, Mingbin; Kwong, Dim-Lee; Wen, Guangjun; Churchill, Layne; Wong, Chee Wei

    2017-06-29

    For sensitive high-resolution force- and field-sensing applications, large-mass microelectromechanical systems (MEMS) and optomechanical cavities have been proposed to realize sub-aN/Hz^{1/2} resolution levels. In optomechanical cavity-based force and field sensors, the optomechanical coupling is the key parameter for achieving high sensitivity and resolution. Here we demonstrate a chip-scale optomechanical cavity with large mass which operates at a ≈77.7 kHz fundamental mode and intrinsically exhibits a large optomechanical coupling of 44 GHz/nm or more for both optical resonance modes. A mechanical stiffening range of ≈58 kHz and more than 100th-order harmonics are obtained, with which the free-running frequency instability is lower than 10^-6 at 100 ms integration time. Such results can be applied to further improve the sensing performance of optomechanics-inspired chip-scale sensors.

  15. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    NASA Astrophysics Data System (ADS)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  16. Cl-Assisted Large Scale Synthesis of Cm-Scale Buckypapers of Fe₃C-Filled Carbon Nanotubes with Pseudo-Capacitor Properties: The Key Role of SBA-16 Catalyst Support as Synthesis Promoter.

    PubMed

    Boi, Filippo S; He, Yi; Wen, Jiqiu; Wang, Shanling; Yan, Kai; Zhang, Jingdong; Medranda, Daniel; Borowiec, Joanna; Corrias, Anna

    2017-10-23

    We show a novel chemical vapour deposition (CVD) approach, in which the large-scale fabrication of ferromagnetically-filled cm-scale buckypapers is achieved through the deposition of a mesoporous supported catalyst (SBA-16) on a silicon substrate. We demonstrate that SBA-16 has the crucial role of promoting the growth of carbon nanotubes (CNTs) on a horizontal plane with random orientation rather than in a vertical direction, therefore allowing a facile fabrication of cm-scale CNTs buckypapers free from the onion-crust by-product observed on the buckypaper-surface in previous reports. The morphology and composition of the obtained CNTs-buckypapers are analyzed in detail by scanning electron microscopy (SEM), Energy Dispersive X-ray (EDX), transmission electron microscopy (TEM), high resolution TEM (HRTEM), and thermogravimetric analysis (TGA), while structural analysis is performed by Rietveld Refinement of XRD data. The room temperature magnetic properties of the produced buckypapers are also investigated and reveal the presence of a high coercivity of 650 Oe. Additionally, the electrochemical performances of these buckypapers are demonstrated and reveal a behavior that is compatible with that of a pseudo-capacitor (resistive-capacitor) with better performances than those presented in other previously studied layered-buckypapers of Fe-filled CNTs, obtained by pyrolysis of dichlorobenzene-ferrocene mixtures. These measurements indicate that these materials show promise for applications in energy storage systems as flexible electrodes.

  17. Job Management and Task Bundling

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.
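
    The bundling idea described above can be reduced to a packing problem: many small, independent tasks are grouped so that one large allocation stays busy. The sketch below is a minimal first-fit illustration of that idea only; it is not METAQ or mpi_jm, and it ignores wall-time limits and the dynamic backfilling those tools provide.

```python
# Minimal sketch: pack small node-count tasks into large job allocations.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    nodes: int  # nodes this task needs

def bundle_tasks(tasks, nodes_per_job):
    """First-fit-decreasing packing of tasks into job-sized bundles."""
    bundles = []  # each bundle: [free_nodes, [tasks]]
    for task in sorted(tasks, key=lambda t: t.nodes, reverse=True):
        for bundle in bundles:
            if bundle[0] >= task.nodes:
                bundle[0] -= task.nodes
                bundle[1].append(task)
                break
        else:
            bundles.append([nodes_per_job - task.nodes, [task]])
    return [b[1] for b in bundles]

# example: pack 100 four-node measurements into 128-node allocations
tasks = [Task(f"cfg{i}", 4) for i in range(100)]
for i, group in enumerate(bundle_tasks(tasks, nodes_per_job=128)):
    print(f"job {i}: {len(group)} tasks, {sum(t.nodes for t in group)} nodes")
```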

  18. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  19. High areal capacity hybrid magnesium-lithium-ion battery with 99.9% Coulombic efficiency for large-scale energy storage.

    PubMed

    Yoo, Hyun Deog; Liang, Yanliang; Li, Yifei; Yao, Yan

    2015-04-01

    Hybrid magnesium-lithium-ion batteries (MLIBs) featuring dendrite-free deposition of Mg anode and Li-intercalation cathode are safe alternatives to Li-ion batteries for large-scale energy storage. Here we report for the first time the excellent stability of a high areal capacity MLIB cell and dendrite-free deposition behavior of Mg under high current density (2 mA cm(-2)). The hybrid cell showed no capacity loss for 100 cycles with Coulombic efficiency as high as 99.9%, whereas the control cell with a Li-metal anode only retained 30% of its original capacity with Coulombic efficiency well below 90%. The use of TiS2 as a cathode enabled the highest specific capacity and one of the best rate performances among reported MLIBs. Postmortem analysis of the cycled cells revealed dendrite-free Mg deposition on a Mg anode surface, while mossy Li dendrites were observed covering the Li surface and penetrated into separators in the Li cell. The energy density of a MLIB could be further improved by developing electrolytes with higher salt concentration and wider electrochemical window, leading to new opportunities for its application in large-scale energy storage.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Ji Sung; Koo, Eunmo; Munoz-Esparza, Domingo

    High-resolution large-eddy simulation of the flow over a large wind farm (64 wind turbines) is performed using the HIGRAD/FIRETEC-WindBlade model, which is a high-performance computing wind turbine–atmosphere interaction model that uses the Lagrangian actuator line method to represent rotating turbine blades. These high-resolution large-eddy simulation results are used to parameterize the thrust and power coefficients that contain information about turbine interference effects within the wind farm. Those coefficients are then incorporated into the WRF (Weather Research and Forecasting) model in order to evaluate interference effects in larger-scale models. In the high-resolution WindBlade wind farm simulation, insufficient distance between turbines creates interference between turbines, including significant vertical variations in momentum and turbulent intensity. The characteristics of the wake are further investigated by analyzing the distribution of the vorticity and turbulent intensity. Quadrant analysis in the turbine and post-turbine areas reveals that the ejection motion induced by the presence of the wind turbines is dominant compared to that in the other quadrants, indicating that the sweep motion is increased at the location where strong wake recovery occurs. Regional-scale WRF simulations reveal that although the turbulent mixing induced by the wind farm is partly diffused to the upper region, there is no significant change in the boundary layer depth. The velocity deficit does not appear to be very sensitive to the local distribution of turbine coefficients. However, differences of about 5% on parameterized turbulent kinetic energy were found depending on the turbine coefficient distribution. Furthermore, turbine coefficients that consider interference in the wind farm should be used in wind farm parameterization for larger-scale models to better describe sub-grid scale turbulent processes.

  1. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10^6 cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  2. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
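
    The core of both the Boolean and the extended viewshed computations mentioned above is a line-of-sight test along terrain profiles. The sketch below is a minimal single-profile illustration (my own, not the article's implementation): a cell is visible if its vertical angle from the observer is not below the running maximum angle of all nearer cells, and the surplus over that local horizon gives an "angle above horizon" measure.

```python
# Minimal sketch of a single-profile line-of-sight / angle-above-horizon test.
import numpy as np

def profile_visibility(heights, spacing, observer_height=1.7):
    """heights[0] is the observer cell; returns, for cells 1..n-1, a Boolean
    visibility flag and the angle above the local horizon (radians)."""
    z_obs = heights[0] + observer_height
    dist = np.arange(1, len(heights)) * spacing
    angles = np.arctan2(heights[1:] - z_obs, dist)  # vertical angle to each cell
    # running maximum over all *nearer* cells defines the local horizon
    horizon = np.maximum.accumulate(np.concatenate([[-np.pi / 2], angles[:-1]]))
    visible = angles >= horizon
    return visible, angles - horizon

# tiny example: a ridge at 340 m hides the lower cells behind it
heights = np.array([300.0, 302.0, 340.0, 310.0, 305.0, 350.0])
print(profile_visibility(heights, spacing=10.0))
```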

  3. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carns, Philip; Harms, Kevin; Jenkins, John

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
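
    For readers unfamiliar with deterministic object placement of the kind CRUSH performs, the sketch below shows the general idea with rendezvous (highest-random-weight) hashing, which is not CRUSH itself: every client can compute an object's replica set from the object name and the server list alone, with no central lookup table.

```python
# Minimal sketch of hash-based replica placement (rendezvous hashing, not CRUSH).
import hashlib

def placement(object_id, servers, replicas=3):
    """Return the `replicas` highest-scoring servers for this object."""
    def score(server):
        digest = hashlib.sha256(f"{object_id}:{server}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(servers, key=score, reverse=True)[:replicas]

# example: place one object on 3 of 16 storage daemons
servers = [f"osd{i}" for i in range(16)]
print(placement("object-42", servers))
```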

  4. Biomimetic nanocoatings with exceptional mechanical, barrier, and flame-retardant properties from large-scale one-step coassembly

    PubMed Central

    Ding, Fuchuan; Liu, Jingjing; Zeng, Songshan; Xia, Yan; Wells, Kacie M.; Nieh, Mu-Ping; Sun, Luyi

    2017-01-01

    Large-scale biomimetic organic/inorganic hybrid nanocoatings with a nacre-like microstructure were prepared via a facile coassembly process. Different from conventional polymer nanocomposites, these nanocoatings contain a high concentration of nanosheets, which can be well aligned along the substrate surface. Moreover, the nanosheets and polymer matrix can be chemically co–cross-linked. As a result, the nanocoatings exhibit exceptional mechanical properties (high stiffness and strength), barrier properties (to both oxygen and water vapor), and flame retardancy, but they are also highly transparent (maintaining more than 85% of their original transmittance to visible light). The nanocoatings can be applied to various substrates and regular or irregular surfaces (for example, films and foams). Because of their excellent performance and high versatility, these nanocoatings are expected to find widespread application. PMID:28776038

  5. Research on solar pumped liquid lasers

    NASA Technical Reports Server (NTRS)

    Cox, J. D.; Kurzweg, U. H.; Weinstein, N. H.; Schneider, R. T.

    1985-01-01

    A solar pumped liquid laser that can be scaled up to high power (10 mW CW) for space applications was developed. Liquid lasers have the advantage over gases in that they provide much higher lasant densities and thus high power densities. Liquids also have advantages over solids in that they have much higher damage thresholds and are much cheaper to produce for large scale applications. Among the liquid laser media that are potential candidates for solar pumping, the POCl3:Nd3+:ZrCl4 liquid was chosen for its high intrinsic efficiency and its relatively good stability against decomposition due to protic contamination. The development of a manufacturing procedure and performance testing of the laser liquid, and the development of an inexpensive large solar concentrator to pump the laser, are examined.

  6. Genome wide analysis of flowering time trait in multiple environments via high-throughput genotyping technique in Brassica napus L.

    PubMed

    Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng

    2015-01-01

    The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers, and the detection of the underlying genetic factors, is important not only for oilseed producers around the world but also for the other crop industries in the rotation system in China. In previous studies, the low density and mixed types of the markers used obstructed genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a doubled-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments in three geographic regions via eight existing genomic prediction models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed large-scale mapping of FT-related loci among the three regions and found 437 associated SNPs, some of which represented known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed that significant interactions were found between detected loci, even among some known FT-related genes. All the results show that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.
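
    Most of the genomic prediction models the study compares reduce, in their simplest form, to penalized regression of the phenotype on all markers at once. The sketch below (my own minimal illustration with an assumed marker coding and penalty, not the study's pipeline) shows an rrBLUP-style version with scikit-learn and reports cross-validated prediction accuracy as the correlation between predicted and observed trait values.

```python
# Minimal sketch of ridge-regression genomic prediction of a quantitative trait.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def genomic_prediction_accuracy(snps, phenotype, alpha=100.0, folds=5):
    """snps: (lines x markers) matrix coded 0/1/2; phenotype: trait vector.
    Returns the cross-validated correlation between predicted and observed values."""
    model = Ridge(alpha=alpha)                     # shrinkage over all markers jointly
    predicted = cross_val_predict(model, snps, phenotype, cv=folds)
    return np.corrcoef(predicted, phenotype)[0, 1]
```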

  7. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  8. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  9. ATLAS and LHC computing on CRAY

    NASA Astrophysics Data System (ADS)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access to and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economy of scale. Technical solutions, performance, expected return and future plans are discussed.

  10. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  11. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132

  12. Graphene/MoS2 hybrid technology for large-scale two-dimensional electronics.

    PubMed

    Yu, Lili; Lee, Yi-Hsien; Ling, Xi; Santos, Elton J G; Shin, Yong Cheol; Lin, Yuxuan; Dubey, Madan; Kaxiras, Efthimios; Kong, Jing; Wang, Han; Palacios, Tomás

    2014-06-11

    Two-dimensional (2D) materials have generated great interest in the past few years as a new toolbox for electronics. This family of materials includes, among others, metallic graphene, semiconducting transition metal dichalcogenides (such as MoS2), and insulating boron nitride. These materials and their heterostructures offer excellent mechanical flexibility, optical transparency, and favorable transport properties for realizing electronic, sensing, and optical systems on arbitrary surfaces. In this paper, we demonstrate a novel technology for constructing large-scale electronic systems based on graphene/molybdenum disulfide (MoS2) heterostructures grown by chemical vapor deposition. We have fabricated high-performance devices and circuits based on this heterostructure, where MoS2 is used as the transistor channel and graphene as contact electrodes and circuit interconnects. We provide a systematic comparison of the graphene/MoS2 heterojunction contact to more traditional MoS2-metal junctions, as well as a theoretical investigation, using density functional theory, of the origin of the Schottky barrier height. The tunability of the graphene work function with electrostatic doping significantly improves the ohmic contact to MoS2. These high-performance large-scale devices and circuits based on this 2D heterostructure pave the way for practical flexible transparent electronics.

  13. Guided Growth of Horizontal ZnSe Nanowires and their Integration into High-Performance Blue-UV Photodetectors.

    PubMed

    Oksenberg, Eitan; Popovitz-Biro, Ronit; Rechav, Katya; Joselevich, Ernesto

    2015-07-15

    Perfectly aligned horizontal ZnSe nanowires are obtained by guided growth, and easily integrated into high-performance blue-UV photodetectors. Their crystal phase and crystallographic orientation are controlled by the epitaxial relations with six different sapphire planes. Guided growth paves the way for the large-scale integration of nanowires into optoelectronic devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  15. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
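
    The optimization these two records refer to is, per moment vector, a small convex dual problem: find Lagrange multipliers so that an exponential ansatz reproduces the prescribed moments. The sketch below illustrates that kernel for the simplest one-dimensional M1 case using Newton iteration and Gauss-Legendre quadrature; it is my own illustration of the general idea, not the paper's GPU-accelerated implementation.

```python
# Minimal sketch of the dual Newton solve behind an entropy-based (M_N) closure,
# specialized to the 1D M_1 case with basis p(mu) = [1, mu] on mu in [-1, 1].
import numpy as np

mu, w = np.polynomial.legendre.leggauss(32)   # quadrature nodes/weights on [-1, 1]
p = np.vstack([np.ones_like(mu), mu])          # moment basis evaluated at the nodes

def solve_multipliers(moments, tol=1e-10, max_iter=50):
    """Find alpha so that integral p(mu) exp(alpha . p(mu)) dmu equals `moments`."""
    alpha = np.zeros(2)
    for _ in range(max_iter):
        psi = np.exp(alpha @ p)                  # entropy ansatz at quadrature nodes
        residual = p @ (w * psi) - moments       # realized minus target moments
        hessian = (p * (w * psi)) @ p.T          # SPD Hessian of the dual objective
        alpha -= np.linalg.solve(hessian, residual)
        if np.linalg.norm(residual) < tol:
            break
    return alpha

# example: realizable moments (E, F) = (1.0, 0.3)
print(solve_multipliers(np.array([1.0, 0.3])))
```

    In a kinetic solver this small Newton solve is repeated in every cell at every time step, which is why the papers focus on GPU acceleration and load balancing of exactly this kernel.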

  16. Experimental study of detonation of large-scale powder-droplet-vapor mixtures

    NASA Astrophysics Data System (ADS)

    Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.

    2018-05-01

    Large-scale experiments were carried out to investigate the detonation performance of a 1600-m3 ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernably increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.

  17. Channel optimization of high-intensity laser beams in millimeter-scale plasmas.

    PubMed

    Ceurvorst, L; Savin, A; Ratan, N; Kasim, M F; Sadler, J; Norreys, P A; Habara, H; Tanaka, K A; Zhang, S; Wei, M S; Ivancic, S; Froula, D H; Theobald, W

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^{18}W/cm^{2}) kilojoule laser pulses through large density scale length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  18. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    NASA Astrophysics Data System (ADS)

    Ceurvorst, L.; Savin, A.; Ratan, N.; Kasim, M. F.; Sadler, J.; Norreys, P. A.; Habara, H.; Tanaka, K. A.; Zhang, S.; Wei, M. S.; Ivancic, S.; Froula, D. H.; Theobald, W.

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^18 W/cm^2) kilojoule laser pulses through large density scale length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  19. Predicting the propagation of concentration and saturation fronts in fixed-bed filters.

    PubMed

    Callery, O; Healy, M G

    2017-10-15

    The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess novel media for potential use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests was used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
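
    The general strategy the paper describes, fitting a breakthrough model to short small-scale column runs and re-using the fitted parameters at larger scale, can be pictured with a standard Thomas-type logistic breakthrough curve. The sketch below is my own illustration under that assumption: it is not the authors' model, and all numbers (concentrations, flows, media masses) are hypothetical.

```python
# Minimal sketch: fit a Thomas-type breakthrough curve to small-scale column
# data, then reuse the fitted parameters to predict a larger filter.
import numpy as np
from scipy.optimize import curve_fit

def thomas(t, kTh, q0, mass, flow, c0):
    """Effluent/influent ratio C/C0 vs time t for the Thomas model."""
    return 1.0 / (1.0 + np.exp(kTh * (q0 * mass / flow - c0 * t)))

# hypothetical small-scale data: time (h) and measured C/C0
t_small = np.array([1, 5, 10, 20, 40, 60, 80], dtype=float)
ratio_small = np.array([0.01, 0.03, 0.08, 0.25, 0.60, 0.85, 0.95])
c0, flow_small, mass_small = 5.0, 0.5, 0.02   # mg/L, L/h, kg (assumed)

popt, _ = curve_fit(
    lambda t, kTh, q0: thomas(t, kTh, q0, mass_small, flow_small, c0),
    t_small, ratio_small, p0=[0.01, 4000.0])
kTh, q0 = popt

# predict breakthrough for a larger filter with more media and higher flow
t_large = np.linspace(0, 2000, 200)
ratio_large = thomas(t_large, kTh, q0, mass=2.0, flow=5.0, c0=c0)
```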

  20. A transportable Paul-trap for levitation and accurate positioning of micron-scale particles in vacuum for laser-plasma experiments

    NASA Astrophysics Data System (ADS)

    Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.

    2018-01-01

    We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.

  1. Cognitive Model Exploration and Optimization: A New Challenge for Computational Science

    DTIC Science & Technology

    2010-03-01

    the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post

  2. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
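
    The solution path this abstract describes, one Choleski factorization of the symmetric positive-definite stiffness matrix followed by forward/back substitution for each load case, is easy to show in miniature. The sketch below uses SciPy's dense Cholesky routines purely as an illustration; the paper's Force/FORTRAN implementation parallelizes and vectorizes these same steps on supercomputers.

```python
# Minimal sketch: factor the stiffness matrix once, then solve per load case.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_displacements(K, f):
    """Solve K u = f for a symmetric positive-definite stiffness matrix K."""
    factor = cho_factor(K)            # K = L L^T, the one-time factorization
    return cho_solve(factor, f)       # forward/back substitution per load case

# hypothetical two-DOF example
K = np.array([[4.0, 1.0],
              [1.0, 3.0]])
f = np.array([1.0, 2.0])
print(solve_displacements(K, f))
```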

  3. Zebrafish Whole-Adult-Organism Chemogenomics for Large-Scale Predictive and Discovery Chemical Biology

    PubMed Central

    Lam, Siew Hong; Mathavan, Sinnakarupan; Tong, Yan; Li, Haixia; Karuturi, R. Krishna Murthy; Wu, Yilian; Vega, Vinsensius B.; Liu, Edison T.; Gong, Zhiyuan

    2008-01-01

    The ability to perform large-scale, expression-based chemogenomics on whole adult organisms, as in invertebrate models (worm and fly), is highly desirable for a vertebrate model but its feasibility and potential has not been demonstrated. We performed expression-based chemogenomics on the whole adult organism of a vertebrate model, the zebrafish, and demonstrated its potential for large-scale predictive and discovery chemical biology. Focusing on two classes of compounds with wide implications to human health, polycyclic (halogenated) aromatic hydrocarbons [P(H)AHs] and estrogenic compounds (ECs), we generated robust prediction models that can discriminate compounds of the same class from those of different classes in two large independent experiments. The robust expression signatures led to the identification of biomarkers for potent aryl hydrocarbon receptor (AHR) and estrogen receptor (ER) agonists, respectively, and were validated in multiple targeted tissues. Knowledge-based data mining of human homologs of zebrafish genes revealed highly conserved chemical-induced biological responses/effects, health risks, and novel biological insights associated with AHR and ER that could be inferred to humans. Thus, our study presents an effective, high-throughput strategy of capturing molecular snapshots of chemical-induced biological states of a whole adult vertebrate that provides information on biomarkers of effects, deregulated signaling pathways, and possible affected biological functions, perturbed physiological systems, and increased health risks. These findings place zebrafish in a strategic position to bridge the wide gap between cell-based and rodent models in chemogenomics research and applications, especially in preclinical drug discovery and toxicology. PMID:18618001

  4. The NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform to Support the Analysis of Petascale Environmental Data Collections

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Pugh, T.; Wyborn, L. A.; Porter, D.; Allen, C.; Smillie, J.; Antony, J.; Trenham, C.; Evans, B. J.; Beckett, D.; Erwin, T.; King, E.; Hodge, J.; Woodcock, R.; Fraser, R.; Lescinsky, D. T.

    2014-12-01

    The National Computational Infrastructure (NCI) has co-located a priority set of national data assets within an HPC research platform. This powerful in-situ computational platform has been created to help serve and analyse the massive amounts of data across the spectrum of environmental collections - in particular the climate, observational data and geoscientific domains. This paper examines the infrastructure, innovation and opportunity for this significant research platform. NCI currently manages nationally significant data collections (10+ PB) categorised as 1) earth system sciences, climate and weather model data assets and products, 2) earth and marine observations and products, 3) geosciences, 4) terrestrial ecosystem, 5) water management and hydrology, and 6) astronomy, social science and biosciences. The data is largely sourced from the NCI partners (who include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. By co-locating these large valuable data assets, new opportunities have arisen by harmonising the data collections, making a powerful transdisciplinary research platform. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. New scientific software, cloud-scale techniques, server-side visualisation and data services have been harnessed and integrated into the platform, so that analysis is performed seamlessly across the traditional boundaries of the underlying data domains. Characterisation of the techniques along with performance profiling ensures scalability of each software component, all of which can either be enhanced or replaced through future improvements. A Development-to-Operations (DevOps) framework has also been implemented to manage the scale of the software complexity alone. This ensures that software is both upgradable and maintainable, and can be readily reused with complexly integrated systems and become part of the growing global trusted community tools for cross-disciplinary research.

  5. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.
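
    The integration analysis described above can be illustrated with a small, hedged sketch: the participation coefficient is one common graph metric of how strongly a node connects across subnetworks, computed here with plain NumPy on a synthetic connectivity matrix and made-up module labels (this is not the authors' pipeline or data).

```python
# Minimal sketch: participation coefficient as an integration metric for a
# weighted functional-connectivity graph. Matrix and module labels are synthetic.
import numpy as np

def participation_coefficient(W, modules):
    """W: (n, n) symmetric weighted adjacency; modules: length-n label array."""
    strength = W.sum(axis=1)                      # total connection weight per node
    pc = np.ones(len(W))
    for m in np.unique(modules):
        within = W[:, modules == m].sum(axis=1)   # weight from each node into module m
        pc -= (within / strength) ** 2            # PC_i = 1 - sum_m (k_im / k_i)^2
    return pc

rng = np.random.default_rng(0)
n = 20
W = np.abs(rng.normal(size=(n, n)))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
modules = rng.integers(0, 4, size=n)              # four hypothetical subnetworks
print(participation_coefficient(W, modules).round(2))
```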

  6. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the "big data" challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
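
    A hedged, minimal sketch of the partition-merge idea can make the pipeline concrete: objects are mapped to the grid tiles their bounding boxes overlap, joined locally within each tile, and the per-tile results are merged with duplicate elimination. The tile size, helper names, and toy boxes below are illustrative assumptions, not the paper's MapReduce implementation.

```python
# Partition-merge spatial join sketch: map objects to grid tiles, join locally
# per tile, then merge and deduplicate the per-tile results.
from collections import defaultdict
from itertools import product

TILE = 10.0  # tile size of the (illustrative) partition grid

def tiles(box):
    x1, y1, x2, y2 = box
    for i in range(int(x1 // TILE), int(x2 // TILE) + 1):
        for j in range(int(y1 // TILE), int(y2 // TILE) + 1):
            yield (i, j)

def intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def spatial_join(layer_a, layer_b):
    part = defaultdict(lambda: ([], []))          # "map": partition both layers by tile
    for oid, box in layer_a.items():
        for t in tiles(box):
            part[t][0].append((oid, box))
    for oid, box in layer_b.items():
        for t in tiles(box):
            part[t][1].append((oid, box))
    result = set()                                # "reduce": local join, then merge
    for a_objs, b_objs in part.values():
        for (ia, ba), (ib, bb) in product(a_objs, b_objs):
            if intersects(ba, bb):
                result.add((ia, ib))              # set() removes cross-tile duplicates
    return result

nuclei = {"n1": (0, 0, 3, 3), "n2": (12, 12, 15, 14)}
vessels = {"v1": (2, 2, 5, 5), "v2": (40, 40, 41, 41)}
print(spatial_join(nuclei, vessels))              # {('n1', 'v1')}
```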

  7. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven by not only geospatial problems in numerous fields, but also emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging provides high potential to support image based computer aided diagnosis. One major requirement for this is effective querying of such enormous amount of data with fast response, which is faced with two major challenges: the “big data” challenge and the high computation complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for microanatomic objects. To reduce query response time, we propose cost based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  8. A novel computational approach towards the certification of large-scale boson sampling

    NASA Astrophysics Data System (ADS)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments point to a possible disproof of the extended Church-Turing thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier component can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology(NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
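
    One ingredient of the proposed certification, the correlations of paired photon modes, can be estimated directly from sampled output patterns. The sketch below is a minimal, hedged illustration with randomly generated placeholder samples, not data from an actual boson-sampling device or the authors' protocol.

```python
# Estimate two-mode photon-number correlators C_ij = <n_i n_j> - <n_i><n_j>
# from a set of sampled output patterns (placeholder samples).
import numpy as np

rng = np.random.default_rng(1)
modes, shots, photons = 6, 5000, 3
samples = np.zeros((shots, modes), dtype=int)
for s in range(shots):                          # toy samples: photons land in random modes
    idx = rng.integers(0, modes, size=photons)
    np.add.at(samples[s], idx, 1)

mean_n = samples.mean(axis=0)                   # <n_i>
second = samples.T @ samples / shots            # <n_i n_j>
C = second - np.outer(mean_n, mean_n)           # two-mode correlators
print(np.round(C, 3))
```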

  9. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
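
    As a point of reference for the feature-based methods described above, the simplest pixel-level change score is the per-pixel spectral difference between two co-registered acquisitions, thresholded into a change mask. The hedged sketch below uses synthetic arrays and a crude global threshold; the dissertation's statistical and topological techniques go well beyond this baseline.

```python
# Baseline change detection: per-pixel spectral (Euclidean) difference between
# two co-registered multispectral images, thresholded into a change mask.
import numpy as np

rng = np.random.default_rng(2)
bands, h, w = 4, 64, 64
t0 = rng.normal(size=(bands, h, w))
t1 = t0 + 0.05 * rng.normal(size=(bands, h, w))
t1[:, 20:30, 20:30] += 1.5                      # inject a changed region

delta = np.linalg.norm(t1 - t0, axis=0)         # spectral change magnitude per pixel
mask = delta > delta.mean() + 3 * delta.std()   # simple global threshold
print("changed pixels:", int(mask.sum()))
```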

  10. Sachem: a chemical cartridge for high-performance substructure search.

    PubMed

    Kratochvíl, Miroslav; Vondrášek, Jiří; Galgonek, Jakub

    2018-05-23

    Structure search is one of the valuable capabilities of small-molecule databases. Fingerprint-based screening methods are usually employed to enhance the search performance by reducing the number of calls to the verification procedure. In substructure search, fingerprints are designed to capture important structural aspects of the molecule to aid the decision about whether the molecule contains a given substructure. Currently available cartridges typically provide acceptable search performance for processing user queries, but do not scale satisfactorily with dataset size. We present Sachem, a new open-source chemical cartridge that implements two substructure search methods: The first is a performance-oriented reimplementation of substructure indexing based on the OrChem fingerprint, and the second is a novel method that employs newly designed fingerprints stored in inverted indices. We assessed the performance of both methods on small, medium, and large datasets containing 1, 10, and 94 million compounds, respectively. Comparison of Sachem with other freely available cartridges revealed improvements in overall performance, scaling potential and screen-out efficiency. The Sachem cartridge allows efficient substructure searches in databases of all sizes. The sublinear performance scaling of the second method and the ability to efficiently query large amounts of pre-extracted information may together open the door to new applications for substructure searches.
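
    The screen-then-verify pattern underlying such cartridges can be sketched in a few lines: a cheap feature profile rules out molecules that cannot contain the query before the exact (and more expensive) substructure match runs. The element-count "fingerprint" below is an intentionally crude stand-in for real structural fingerprints, the molecules are illustrative, and RDKit is assumed to be available; this is not Sachem's code.

```python
# Screen-then-verify substructure search sketch (crude element-count screen).
from collections import Counter
from rdkit import Chem

def element_profile(mol):
    return Counter(atom.GetSymbol() for atom in mol.GetAtoms())

def screen(query_profile, mol_profile):
    # a molecule can only match if it has at least the query's heavy-atom counts
    return all(mol_profile[el] >= n for el, n in query_profile.items())

database = [Chem.MolFromSmiles(s) for s in ("CCO", "c1ccccc1O", "CC(=O)N", "c1ccccc1")]
query = Chem.MolFromSmarts("c1ccccc1O")           # phenol substructure
qprof = element_profile(Chem.MolFromSmiles("Oc1ccccc1"))

hits = []
for mol in database:
    if screen(qprof, element_profile(mol)):       # fast screen-out
        if mol.HasSubstructMatch(query):          # exact verification
            hits.append(Chem.MolToSmiles(mol))
print(hits)                                       # expect only the phenol
```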

  11. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time required for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared them with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
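
    The kind of embarrassingly parallel model building the study distributes across cloud instances can be sketched, in a hedged way, on a single node: several classifiers are fitted and cross-validated concurrently with joblib. The synthetic descriptors and labels stand in for real logP/Ames data, and this is not the authors' pipeline.

```python
# Parallel ligand-based model building sketch (synthetic data, single node).
import numpy as np
from joblib import Parallel, delayed
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 64))                   # molecular descriptors (synthetic)
y = (X[:, :4].sum(axis=1) > 0).astype(int)        # toy "active / inactive" label

def fit_and_score(n_trees):
    model = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    return n_trees, cross_val_score(model, X, y, cv=3).mean()

results = Parallel(n_jobs=-1)(delayed(fit_and_score)(n) for n in (50, 100, 200))
for n_trees, score in results:
    print(f"{n_trees} trees: CV accuracy {score:.3f}")
```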

  12. Large-scale synthesis of hybrid metal oxides through metal redox mechanism for high-performance pseudocapacitors

    PubMed Central

    Ren, Zhonghua; Li, Jianpeng; Ren, Yaqi; Wang, Shuguang; Qiu, Yejun; Yu, Jie

    2016-01-01

    Electrochemical performance and production cost are the main concerns for the practical application of supercapacitors. Here we report, for the first time, a simple and universally applicable method to prepare hybrid metal oxides by a metal redox reaction utilizing the inherent reducibility of metals and the oxidizing ability of KMnO4. As an example, Ni(OH)2/MnO2 hybrid nanosheets (NMNSs) are grown for supercapacitor application by self-reaction of Ni foam substrates in KMnO4 solution at room temperature. The obtained hybrid nanosheets exhibit high specific capacitance (2,937 F g−1). The assembled solid-state asymmetric pseudocapacitors possess ultrahigh energy density of 91.13 Wh kg−1 (at the power density of 750 W kg−1) and extraordinary cycling stability with 92.28% capacitance retention after 25,000 cycles. Co(OH)2/MnO2 and Fe2O3/MnO2 hybrid oxides are also synthesized through this metal redox mechanism. This green and low-cost method is capable of large-scale production and one-step preparation of the electrodes, holding promise for practical application of high-performance pseudocapacitors. PMID:26805027

  13. Unleashing elastic energy: dynamics of energy release in rubber bands and impulsive biological systems

    NASA Astrophysics Data System (ADS)

    Ilton, Mark; Cox, Suzanne; Egelmeers, Thijs; Patek, S. N.; Crosby, Alfred J.

    Impulsive biological systems - which include mantis shrimp, trap-jaw ants, and Venus flytraps - can reach high speeds by using elastic elements to store and rapidly release energy. The material behavior and shape changes critical to achieving rapid energy release in these systems are largely unknown due to limitations of materials testing instruments operating at high speed and large displacement. In this work, we perform fundamental, proof-of-concept measurements on the tensile retraction of elastomers. Using high speed imaging, the kinematics of retraction are measured for elastomers with varying mechanical properties and geometry. Based on the kinematics, the rate of energy dissipation in the material is determined as a function of strain and strain-rate, along with a scaling relation which describes the dependence of maximum velocity on material properties. Understanding this scaling relation along with the material failure limits of the elastomer allows the prediction of material properties required for optimal performance. We demonstrate this concept experimentally by optimizing for maximum velocity in our synthetic model system, and achieve retraction velocities that exceed those in biological impulsive systems. This model system provides a foundation for future work connecting continuum performance to molecular architecture in impulsive systems.

  14. Cells as advanced therapeutics: State-of-the-art, challenges, and opportunities in large scale biomanufacturing of high-quality cells for adoptive immunotherapies.

    PubMed

    Dwarshuis, Nate J; Parratt, Kirsten; Santiago-Miranda, Adriana; Roy, Krishnendu

    2017-05-15

    Therapeutic cells hold tremendous promise in treating currently incurable, chronic diseases since they perform multiple, integrated, complex functions in vivo compared to traditional small-molecule drugs or biologics. However, they also pose significant challenges as therapeutic products because (a) their complex mechanisms of actions are difficult to understand and (b) low-cost bioprocesses for large-scale, reproducible manufacturing of cells have yet to be developed. Immunotherapies using T cells and dendritic cells (DCs) have already shown great promise in treating several types of cancers, and human mesenchymal stromal cells (hMSCs) are now extensively being evaluated in clinical trials as immune-modulatory cells. Despite these exciting developments, the full potential of cell-based therapeutics cannot be realized unless new engineering technologies enable cost-effective, consistent manufacturing of high-quality therapeutic cells at large-scale. Here we review cell-based immunotherapy concepts focused on the state-of-the-art in manufacturing processes including cell sourcing, isolation, expansion, modification, quality control (QC), and culture media requirements. We also offer insights into how current technologies could be significantly improved and augmented by new technologies, and how disciplines must converge to meet the long-term needs for large-scale production of cell-based immunotherapies. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Transparent and Flexible Large-scale Graphene-based Heater

    NASA Astrophysics Data System (ADS)

    Kang, Junmo; Lee, Changgu; Kim, Young-Jin; Choi, Jae-Boong; Hong, Byung Hee

    2011-03-01

    We report the application of a transparent and flexible heater with high optical transmittance and low sheet resistance using graphene films, showing outstanding thermal and electrical properties. The large-scale graphene films were grown on Cu foil by chemical vapor deposition methods, and transferred to transparent substrates by multiple stacking. The wet chemical doping process enhanced the electrical properties, showing a sheet resistance as low as 35 ohm/sq with 88.5 % transmittance. The temperature response usually depends on the dimension and the sheet resistance of the graphene-based heater. We show that a 4x4 cm2 heater can reach 80 °C within 40 seconds and a large-scale (9x9 cm2) heater shows uniform heating performance, which was measured using a thermocouple and an infra-red camera. These heaters would be very useful for defogging systems and smart windows.

  16. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and then solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is not only applicable to eigenvalue problems encountered in many-body systems but also to those arising in other areas of research that result in large-scale eigenvalue problems for matrices that have, roughly speaking, a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
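
    The basic projection step can be illustrated numerically. In the hedged sketch below, a diagonally dominant matrix is split into a low-energy block P and a high-energy block Q, the Q block is folded into an effective P-block operator at a fixed reference energy E = 0, and the lowest eigenvalue of the effective operator is compared with the exact one. This shows the general downfolding idea only, not the authors' numerically converging generalization.

```python
# Projection / downfolding sketch: H_eff = H_PP - H_PQ H_QQ^{-1} H_QP at E = 0.
import numpy as np

rng = np.random.default_rng(4)
n, k = 200, 20
H = np.diag(np.sort(rng.uniform(0.0, 50.0, size=n)))   # pronounced dominant diagonal
V = 0.3 * rng.normal(size=(n, n))
H = H + (V + V.T) / 2                                   # weak off-diagonal coupling

P, Q = np.arange(k), np.arange(k, n)                    # low- / high-energy partition
Hpp, Hpq = H[np.ix_(P, P)], H[np.ix_(P, Q)]
Hqp, Hqq = H[np.ix_(Q, P)], H[np.ix_(Q, Q)]

H_eff = Hpp - Hpq @ np.linalg.solve(Hqq, Hqp)           # effective low-energy operator

print("exact lowest eigenvalue     :", np.linalg.eigvalsh(H)[0])
print("projected lowest eigenvalue :", np.linalg.eigvalsh(H_eff)[0])
```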

  17. On mechanics and material length scales of failure in heterogeneous interfaces using a finite strain high performance solver

    NASA Astrophysics Data System (ADS)

    Mosby, Matthew; Matouš, Karel

    2015-12-01

    Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to apprehend fracture characteristics, such as strength, fracture toughness and shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture the particle size-effect in fracture toughness, for example. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.

  18. Assessing the performance of multi-purpose channel management measures at increasing scales

    NASA Astrophysics Data System (ADS)

    Wilkinson, Mark; Addy, Steve

    2016-04-01

    In addition to hydroclimatic drivers, sediment deposition from high energy river systems can reduce channel conveyance capacity and lead to significant increases in flood risk. There is an increasing recognition that we need to work with the interplay of natural hydrological and morphological processes in order to attenuate flood flows and manage sediment (both coarse and fine). This typically includes both catchment (e.g. woodland planting, wetlands) and river (e.g. wood placement, floodplain reconnection) restoration approaches. The aim of this work was to assess at which scales channel management measures (notably wood placement and flood embankment removal) are most appropriate for flood and sediment management in high energy upland river systems. We present research findings from two densely instrumented research sites in Scotland which regularly experience flood events and have associated coarse sediment problems. We assessed the performance of a range of novel trial measures for three different scales: wooded flow restrictors and gully tree planting at the small scale (<1 km2), floodplain tree planting and engineered log jams at the intermediate scale (5-60 km2), and flood embankment lowering at the large scale (350 km2). Our results suggest that at the smallest scale, care is needed in the installation of flow restrictors. It was found for some restrictors that vertical erosion can occur if the tributary channel bed is disturbed. Preliminary model evidence suggested they have a very limited impact on channel discharge and flood peak delay owing to the small storage areas behind the structures. At intermediate scales, the ability to trap sediment by engineered log jams was limited. Of the 45 engineered log jams installed, around half created a small geomorphic response and only 5 captured a significant amount of coarse material (during one large flood event). As scale increases, the chance of damage or loss of wood placement is greatest. Monitoring highlights the importance of structure design (porosity and degree of channel blockage) and placement in zones of high sediment transport to optimise performance. At the large scale, well designed flood embankment lowering can improve connectivity to the floodplain during low to medium return period events. However, ancillary works to stabilise the bank failed thus emphasising the importance of letting natural processes readjust channel morphology and hydrological connections to the floodplain. Although these trial measures demonstrated limited effects, this may be in part owing to restrictions in the range of hydroclimatological conditions during the study period and further work is needed to assess the performance under more extreme conditions. This work will contribute to refining guidance for managing channel coarse sediment problems in the future which in turn could help mitigate flooding using natural approaches.

  19. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity which allow systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only have shed light on the design rules of high performance nanostructured solar cells, but also demonstrated a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry. PMID:24603964

  20. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity which allow systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only have shed light on the design rules of high performance nanostructured solar cells, but also demonstrated a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  1. Quantitative Large-Scale Three-Dimensional Imaging of Human Kidney Biopsies: A Bridge to Precision Medicine in Kidney Disease.

    PubMed

    Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M

    2018-06-05

    Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.
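
    A toy version of the quantitation step conveys the idea of 3D cytometry: label nucleus-like objects in a 3-D volume and report their count and centroids. The hedged sketch below uses a synthetic volume and scipy.ndimage; real 3D tissue cytometry works on multi-channel confocal stacks with classified cell types, not this simple mask.

```python
# Count and locate nucleus-like objects in a synthetic 3-D volume.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(9)
vol = np.zeros((60, 128, 128))
for _ in range(25):                                   # plant 25 synthetic "nuclei"
    z, y, x = rng.integers(5, 55), rng.integers(10, 118), rng.integers(10, 118)
    vol[z - 2:z + 3, y - 3:y + 4, x - 3:x + 4] = 1.0
vol += 0.1 * rng.normal(size=vol.shape)               # imaging noise

mask = ndimage.gaussian_filter(vol, 1) > 0.5          # smooth + threshold
labels, n = ndimage.label(mask)
centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
print(f"detected {n} objects; first centroid (z, y, x) =",
      tuple(round(c, 1) for c in centroids[0]))
```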

  2. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  3. High-frequency self-aligned graphene transistors with transferred gate stacks

    PubMed Central

    Cheng, Rui; Bai, Jingwei; Liao, Lei; Zhou, Hailong; Chen, Yu; Liu, Lixin; Lin, Yung-Chen; Jiang, Shan; Huang, Yu; Duan, Xiangfeng

    2012-01-01

    Graphene has attracted enormous attention for radio-frequency transistor applications because of its exceptional high carrier mobility, high carrier saturation velocity, and large critical current density. Herein we report a new approach for the scalable fabrication of high-performance graphene transistors with transferred gate stacks. Specifically, arrays of gate stacks are first patterned on a sacrificial substrate, and then transferred onto arbitrary substrates with graphene on top. A self-aligned process, enabled by the unique structure of the transferred gate stacks, is then used to position precisely the source and drain electrodes with minimized access resistance or parasitic capacitance. This process has therefore enabled scalable fabrication of self-aligned graphene transistors with unprecedented performance including a record-high cutoff frequency up to 427 GHz. Our study defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra–high-frequency circuits. PMID:22753503

  4. Carbon nanotube circuit integration up to sub-20 nm channel lengths.

    PubMed

    Shulaker, Max Marcel; Van Rethy, Jelle; Wu, Tony F; Liyanage, Luckshitha Suriyasena; Wei, Hai; Li, Zuanyi; Pop, Eric; Gielen, Georges; Wong, H-S Philip; Mitra, Subhasish

    2014-04-22

    Carbon nanotube (CNT) field-effect transistors (CNFETs) are a promising emerging technology projected to achieve over an order of magnitude improvement in energy-delay product, a metric of performance and energy efficiency, compared to silicon-based circuits. However, due to substantial imperfections inherent with CNTs, the promise of CNFETs has yet to be fully realized. Techniques to overcome these imperfections have yielded promising results, but thus far only at large technology nodes (1 μm device size). Here we demonstrate the first very large scale integration (VLSI)-compatible approach to realizing CNFET digital circuits at highly scaled technology nodes, with devices ranging from 90 nm to sub-20 nm channel lengths. We demonstrate inverters functioning at 1 MHz and a fully integrated CNFET infrared light sensor and interface circuit at 32 nm channel length. This demonstrates the feasibility of realizing more complex CNFET circuits at highly scaled technology nodes.

  5. McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression

    DOE PAGES

    Islam, Tanzima Zerin; Mohror, Kathryn; Bagchi, Saurabh; ...

    2013-01-01

    High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
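
    The data-aware idea can be sketched, in a hedged way, in a few lines: rather than compressing each process's checkpoint independently, variables with the same name are concatenated across processes before compression. The toy HDF5 checkpoints are written first so the sketch is self-contained; h5py and numpy are assumed, the gains depend entirely on the data, and this is not mcrEngine's actual code.

```python
# Aggregate same-named checkpoint variables across processes before compressing.
import zlib
import h5py
import numpy as np

rng = np.random.default_rng(5)
paths = []
for rank in range(4):                                  # fake per-process checkpoints
    path = f"ckpt_rank{rank}.h5"
    with h5py.File(path, "w") as f:
        f["temperature"] = 300 + rng.normal(size=50_000)
        f["pressure"] = 101.3 + rng.normal(size=50_000)
    paths.append(path)

per_process = b""
grouped = {}
for path in paths:
    with h5py.File(path, "r") as f:
        for name in f:
            data = f[name][...].tobytes()
            per_process += zlib.compress(data)              # baseline: compress per file/variable
            grouped[name] = grouped.get(name, b"") + data   # aggregate like variables

aggregated = b"".join(zlib.compress(blob) for blob in grouped.values())
print("per-process compression:", len(per_process), "bytes")
print("aggregated compression :", len(aggregated), "bytes")
```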

  6. Nanomanufacturing : nano-structured materials made layer-by-layer.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, James V.; Cheng, Shengfeng; Grest, Gary Stephen

    Large-scale, high-throughput production of nano-structured materials (i.e. nanomanufacturing) is a strategic area in manufacturing, with markets projected to exceed $1T by 2015. Nanomanufacturing is still in its infancy; process/product developments are costly and only touch on potential opportunities enabled by growing nanoscience discoveries. The greatest promise for high-volume manufacturing lies in age-old coating and imprinting operations. For materials with tailored nm-scale structure, imprinting/embossing must be achieved at high speeds (roll-to-roll) and/or over large areas (batch operation) with feature sizes less than 100 nm. Dispersion coatings with nanoparticles can also tailor structure through self- or directed-assembly. Layering films structured with these processes have tremendous potential for efficient manufacturing of microelectronics, photovoltaics and other topical nano-structured devices. This project is designed to perform the requisite R and D to bring Sandia's technology base in computational mechanics to bear on this scale-up problem. Project focus is enforced by addressing a promising imprinting process currently being commercialized.

  7. Optimization and Scale-up of Inulin Extraction from Taraxacum kok-saghyz roots.

    PubMed

    Hahn, Thomas; Klemm, Andrea; Ziesse, Patrick; Harms, Karsten; Wach, Wolfgang; Rupp, Steffen; Hirth, Thomas; Zibek, Susanne

    2016-05-01

    The optimization and scale-up of inulin extraction from Taraxacum kok-saghyz Rodin were successfully performed. Based on solubility investigations, the extraction temperature was fixed at 85 degrees C. The stability of inulin against degradation or hydrolysis was confirmed by extraction in the presence of model inulin. With stability confirmed under the given conditions, the isolation procedure was transferred from a 1 L- to a 1 m3-reactor. The Reynolds number was selected as the relevant dimensionless number that has to remain constant in both scales. The stirrer speed in the large scale was adjusted to 3.25 rpm, based on the 300 rpm stirrer speed in the 1 L-scale and the relevant physical and process engineering parameters, as checked in the sketch below. These assumptions were confirmed by approximately matching extraction kinetics in both scales. Since T. kok-saghyz is in the focus of research due to its rubber content, the isolation of side-products from the residual biomass is of great economic interest. Inulin is one of these additional side-products; it can be isolated in high quantity (~35% of dry mass) and with a high average degree of polymerization (15.5) at large scale with a purity of 77%.
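
    The quoted scale-up rule can be checked with a short calculation. Assuming geometric similarity (impeller diameter scaling with the cube root of vessel volume) and a constant stirred-tank Reynolds number Re = rho*N*D^2/mu, the 300 rpm small-scale speed maps to roughly 3 rpm at 1 m3; the paper's 3.25 rpm reflects the actual vessel and impeller geometry, which this sketch does not model.

```python
# Reynolds-number-matched stirrer speed scale-up under geometric similarity.
V1, V2 = 1e-3, 1.0            # vessel volumes in m^3 (1 L and 1 m^3)
N1 = 300.0                    # small-scale stirrer speed in rpm

scale = (V2 / V1) ** (1 / 3)  # impeller diameter ratio D2/D1 under similarity
N2 = N1 / scale ** 2          # constant Re = rho*N*D^2/mu  =>  N2 = N1*(D1/D2)^2
print(f"diameter ratio {scale:.1f}, large-scale stirrer speed ~ {N2:.2f} rpm")
```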

  8. Growth of 2D Materials and Application in Electrochemical Energy Conversion

    NASA Astrophysics Data System (ADS)

    Ye, Gonglan

    The discovery of graphene in 2004 has generated enormous interest among scientists owing to graphene's versatile potential. The enthusiasm for graphene has recently been extended to other members of the two-dimensional (2D) materials family for applications in electronics, optoelectronics, and catalysis. Different from graphene, atomically-thin transition metal dichalcogenides (TMDs) have varied band gaps and would be beneficial for applications in the semiconductor industry. One of the promising applications of 2D TMDs is for 2D integrated circuits to replace current Si-based electronics. In addition to electronic applications, 2D materials are also good candidates for electrochemical energy storage and conversion due to their large surface area and atomic thickness. This thesis mainly focuses on the synthesis of 2D materials and their application in energy conversion. Firstly, we focus on the synthesis of two-dimensional tin disulfide (SnS2). SnS2 is considered to be a novel material in the 2D family. 2D SnS2 has a large band gap (~2.8 eV) and high carrier mobility, which makes it a potential candidate for electronics. Monolayer SnS2 with large scale and high crystal quality was successfully synthesized by chemical vapor deposition (CVD), and its performance as a photodetector was examined. The next chapter demonstrated a generic method for growing millimeter-scale single crystals as well as wafer-scale thin films of TMDs. This generic method was obtained by studying the precursors' behavior and the flow dynamics during the CVD process of growing MoSe2, and was extended to other TMD layers such as millimeter-scale WSe2 single crystals. Understanding the growth processes of high quality large area monolayers of TMDs is crucial for further fundamental research as well as future development for scalable complex electronics. Besides the synthesis of 2D materials with high quality, we further explored the relationship between defects and electrochemical properties. By directly observing and correlating the microscale structural changes of TMD monolayers such as MoS2 to the catalytic properties, we were able to provide insight on the fundamental catalytic mechanism for the hydrogen evolution reaction. Finally, we used the 2D materials to build up 3D architectures, showing excellent performance in energy storage and conversion. For example, we used graphene as a conductive scaffold to support vanadium oxide (V2O5) at the nanoscale, and achieved high performance for supercapacitors. Also, we applied Pt-anchored N-doped graphene nanoribbons as the catalyst for methanol electro-oxidation, and reported the best performance among Pt/carbon-based catalysts.

  9. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
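
    The sparsity argument can be made concrete with back-of-the-envelope numbers (illustrative assumptions only, not NEST's actual data structures): with a fixed in-degree K per neuron, a compute node hosting M neurons receives from at most M*K distinct sources, so storage that only materialises entries for sources that actually connect stays bounded while a dense per-source table grows with the total network size N.

```python
# Dense vs sparsity-aware connection bookkeeping per compute node (toy numbers).
K = 10_000          # incoming connections per neuron (assumed upper limit)
M = 100_000         # neurons hosted on one compute node (assumption)

for N in (1_000_000, 100_000_000, 10_000_000_000):
    dense_entries = N                      # one slot per possible source neuron
    sparse_entries = min(N, M * K)         # only sources that can actually appear
    print(f"N={N:>14,d}  dense={dense_entries:>14,d}  sparsity-aware<={sparse_entries:>14,d}")
```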

  10. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  11. Variational study on the vibrational level structure and vibrational level mixing of highly vibrationally excited S₀ D₂CO.

    PubMed

    Rashev, Svetoslav; Moule, David C; Rashev, Vladimir

    2012-11-01

    We perform converged high precision variational calculations to determine the frequencies of a large number of vibrational levels in S₀ D₂CO, extending from low to very high excess vibrational energies. For the calculations we use our specific vibrational method (recently employed for studies on H₂CO), consisting of a combination of a search/selection algorithm and a Lanczos iteration procedure. Using the same method we perform large scale converged calculations on the vibrational level spectral structure and fragmentation at selected highly excited overtone states, up to excess vibrational energies of ∼17,000 cm⁻¹, in order to study the characteristics of intramolecular vibrational redistribution (IVR), vibrational level density and mode selectivity. Copyright © 2012 Elsevier B.V. All rights reserved.
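
    The Lanczos step referred to above can be shown in miniature. The hedged sketch below implements a basic Lanczos iteration (with full reorthogonalisation for numerical safety) on a random symmetric matrix with a dominant diagonal and compares the lowest eigenvalue against a dense reference; it illustrates the method only, not the authors' search/selection algorithm or vibrational basis.

```python
# Minimal Lanczos iteration for the lowest eigenvalue of a symmetric matrix.
import numpy as np

def lanczos_lowest(A, m=60, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, m))
    q = rng.normal(size=n)
    q /= np.linalg.norm(q)
    alpha, beta = np.zeros(m), np.zeros(m)
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= alpha[j] * q
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalisation
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                          # invariant subspace found
            m = j + 1
            break
        q = w / beta[j]
    T = np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)[0]

rng = np.random.default_rng(6)
n = 500
H = np.diag(rng.uniform(0, 100, size=n)) + 0.5 * rng.normal(size=(n, n))
H = (H + H.T) / 2
print("Lanczos estimate:", lanczos_lowest(H))
print("dense reference :", np.linalg.eigvalsh(H)[0])
```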

  12. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borghesi, Giulio; Bellan, Josette, E-mail: josette.bellan@jpl.nasa.gov; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109-8099

    2015-03-15

    A Direct Numerical Simulation (DNS) database was created representing mixing of species under high-pressure conditions. The configuration considered is that of a temporally evolving mixing layer. The database was examined and analyzed for the purpose of modeling some of the unclosed terms that appear in the Large Eddy Simulation (LES) equations. Several metrics are used to understand the LES modeling requirements. First, a statistical analysis of the DNS-database large-scale flow structures was performed to provide a metric for probing the accuracy of the proposed LES models as the flow fields obtained from accurate LESs should contain structures of morphology statistically similar to those observed in the filtered-and-coarsened DNS (FC-DNS) fields. To characterize the morphology of the large-scale structures, the Minkowski functionals of the iso-surfaces were evaluated for two different fields: the second-invariant of the rate of deformation tensor and the irreversible entropy production rate. To remove the presence of the small flow scales, both of these fields were computed using the FC-DNS solutions. It was found that the large-scale structures of the irreversible entropy production rate exhibit higher morphological complexity than those of the second invariant of the rate of deformation tensor, indicating that the burden of modeling will be on recovering the thermodynamic fields. Second, to evaluate the physical effects which must be modeled at the subfilter scale, an a priori analysis was conducted. This a priori analysis, conducted in the coarse-grid LES regime, revealed that standard closures for the filtered pressure, the filtered heat flux, and the filtered species mass fluxes, in which a filtered function of a variable is equal to the function of the filtered variable, may no longer be valid for the high-pressure flows considered in this study. The terms requiring modeling are the filtered pressure, the filtered heat flux, the filtered pressure work, and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard ones.
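
    The scale-similarity construction credited above with improving the closures can be illustrated in one dimension: the unclosed correlation tau = filter(u*v) - filter(u)*filter(v) is approximated by applying the filter again to the already-filtered fields. The hedged sketch below uses box filters and synthetic fields purely for an a priori style comparison; it is not the paper's LES formulation.

```python
# A priori test of a scale-similarity model for a subfilter correlation (1-D toy).
import numpy as np

def box_filter(f, width):
    kernel = np.ones(width) / width
    return np.convolve(f, kernel, mode="same")

rng = np.random.default_rng(7)
x = np.linspace(0, 2 * np.pi, 2048)
u = np.sin(3 * x) + 0.4 * rng.normal(size=x.size)       # "DNS" velocity-like field
v = np.cos(5 * x) + 0.4 * rng.normal(size=x.size)       # a transported scalar

W = 32
exact = box_filter(u * v, W) - box_filter(u, W) * box_filter(v, W)   # true subfilter term
ubar, vbar = box_filter(u, W), box_filter(v, W)
model = box_filter(ubar * vbar, W) - box_filter(ubar, W) * box_filter(vbar, W)  # scale similarity

corr = np.corrcoef(exact, model)[0, 1]
print(f"a priori correlation between exact and modelled term: {corr:.2f}")
```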

  13. Assessment of aerodynamic performance of V/STOL and STOVL fighter aircraft

    NASA Technical Reports Server (NTRS)

    Nelms, W. P.

    1984-01-01

    The aerodynamic performance of V/STOL and STOVL fighter/attack aircraft was assessed. Aerodynamic and propulsion/airframe integration activities are described and small and large scale research programs are considered. Uncertainties affecting aerodynamic performance that are associated with special configuration features resulting from the V/STOL requirement are addressed. Example uncertainties relate to minimum drag, wave drag, high angle of attack characteristics, and power induced effects.

  14. Computational biomedicine: a challenge for the twenty-first century.

    PubMed

    Coveney, Peter V; Shublaq, Nour W

    2012-01-01

    With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to be able to provide seamless yet secure access to large-scale heterogeneous personal healthcare data, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers), in a facile way that is integrated into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high performance networks.

  15. Fault-tolerant Control of a Cyber-physical System

    NASA Astrophysics Data System (ADS)

    Roxana, Rusu-Both; Eva-Henrietta, Dulf

    2017-10-01

    Cyber-physical systems represent a new emerging field in automatic control. Fault-tolerant control is a key component, because modern, large-scale processes must meet high standards of performance, reliability and safety. Fault propagation in large-scale chemical processes can lead to loss of production, energy, raw materials and even environmental hazard. The present paper develops a multi-agent fault-tolerant control architecture using robust fractional order controllers for a (13C) cryogenic separation column cascade. The JADE (Java Agent DEvelopment Framework) platform was used to implement the multi-agent fault-tolerant control system, while the operational model of the process was implemented in the Matlab/SIMULINK environment. The MACSimJX (Multiagent Control Using Simulink with Jade Extension) toolbox was used to link the control system and the process model. In order to verify the performance and to prove the feasibility of the proposed control architecture, several fault simulation scenarios were performed.

  16. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading third generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focus on establishing high performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the needed biomass quantity for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspension and the small size of microalgae cells in suspension create a significant processing cost during dewatering, and this has raised major concerns towards the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis to establish an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  17. Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load

    NASA Astrophysics Data System (ADS)

    Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.

    2008-09-01

    A superconducting magnetic energy storage (SMES) system has advantages such as rapid large-power response and high storage efficiency, which make it superior to other energy storage systems. A flywheel motor generator (FWMG) has large-scale capacity and high reliability, and hence is broadly utilized for a large pulsed load, while it has comparatively low storage efficiency due to high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) requires a large and long pulsed load, which causes a frequency deviation in a utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load, which combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency control performance but its installation cost is high. The second system has inferior frequency control performance but its installation cost is the lowest. The third system has good frequency control performance, and its installation cost is kept lower than that of the first power system by adjusting the ratio between SMES and FWMG.

  18. Visualization of the Eastern Renewable Generation Integration Study: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron

    The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaics (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.

  19. Evaluation of high-level clouds in cloud resolving model simulations with ARM and KWAJEX observations

    DOE PAGES

    Liu, Zheng; Muhlbauer, Andreas; Ackerman, Thomas

    2015-11-05

    In this paper, we evaluate high-level clouds in a cloud resolving model during two convective cases, ARM9707 and KWAJEX. The simulated joint histograms of cloud occurrence and radar reflectivity compare well with cloud radar and satellite observations when using a two-moment microphysics scheme. However, simulations performed with a single-moment microphysics scheme exhibit low biases of approximately 20 dB. During convective events, the two-moment microphysics overestimates the amount of high-level cloud, and the one-moment microphysics precipitates too readily and underestimates the amount and height of high-level cloud. For ARM9707, persistent large positive biases in high-level cloud are found, which are not sensitive to changes in ice particle fall velocity and ice nuclei number concentration in the two-moment microphysics. These biases are caused by biases in large-scale forcing and maintained by the periodic lateral boundary conditions. The combined effects include significant biases in high-level cloud amount, radiation, and high sensitivity of cloud amount to nudging time scale in both convective cases. The high sensitivity of high-level cloud amount to the thermodynamic nudging time scale suggests that thermodynamic nudging can be a powerful "tuning" parameter for the simulated cloud and radiation but should be applied with caution. The role of the periodic lateral boundary conditions in reinforcing the biases in cloud and radiation suggests that reducing the uncertainty in the large-scale forcing at high levels is important for similar convective cases and has far-reaching implications for simulating high-level clouds in super-parameterized global climate models such as the multiscale modeling framework.
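
    The evaluation diagnostic named above, a joint histogram of cloud occurrence and radar reflectivity, is straightforward to assemble; the hedged sketch below builds a CFAD-style height-reflectivity histogram from synthetic profiles with NumPy, whereas the actual comparison uses simulated and observed reflectivity fields.

```python
# Joint histogram of cloud occurrence (height) vs radar reflectivity (toy data).
import numpy as np

rng = np.random.default_rng(8)
n_profiles, n_levels = 2000, 40
heights = np.tile(np.linspace(0.25, 20, n_levels), n_profiles)   # km
refl = rng.normal(loc=-15 + 0.5 * heights, scale=8)              # dBZ (synthetic)
cloudy = refl > -25                                              # crude cloud mask

H, z_edges, dbz_edges = np.histogram2d(
    heights[cloudy], refl[cloudy],
    bins=[np.linspace(0, 20, 21), np.linspace(-40, 20, 31)],
)
cfad = H / H.sum(axis=1, keepdims=True)       # normalise per height bin (CFAD-style)
print(cfad.shape, cfad[10].round(3)[:5])
```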

  20. Implementation and Performance of GaAs Digital Signal Processing ASICs

    NASA Technical Reports Server (NTRS)

    Whitaker, William D.; Buchanan, Jeffrey R.; Burke, Gary R.; Chow, Terrance W.; Graham, J. Scott; Kowalski, James E.; Lam, Barbara; Siavoshi, Fardad; Thompson, Matthew S.; Johnson, Robert A.

    1993-01-01

    The feasibility of performing high speed digital signal processing in GaAs gate array technology has been demonstrated with the successful implementation of a VLSI communications chip set for NASA's Deep Space Network. This paper describes the techniques developed to solve some of the technology and implementation problems associated with large scale integration of GaAs gate arrays.

  1. Research on solar pumped liquid lasers. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, J.D.; Kurzweg, U.H.; Weinstein, N.H.

    1985-04-01

    A solar pumped liquid laser that can be scaled up to high power (10 mW CW) for space applications was developed. Liquid lasers have the advantage over gases in that they provide much higher lasant densities and thus higher power densities. Liquids also have advantages over solids in that they have much higher damage thresholds and are much cheaper to produce for large-scale applications. Among the liquid laser media that are potential candidates for solar pumping, the POCl₃:Nd³⁺:ZrCl₄ liquid was chosen for its high intrinsic efficiency and its relatively good stability against decomposition due to protic contamination. The development of a manufacturing procedure, performance testing of the laser liquid, and the development of an inexpensive large solar concentrator to pump the laser are examined.

  2. Microwave-Assisted Synthesis of Highly-Crumpled, Few-Layered Graphene and Nitrogen-Doped Graphene for Use as High-Performance Electrodes in Capacitive Deionization

    NASA Astrophysics Data System (ADS)

    Amiri, Ahmad; Ahmadi, Goodarz; Shanbedi, Mehdi; Savari, Maryam; Kazi, S. N.; Chew, B. T.

    2015-12-01

    Capacitive deionization (CDI) is a promising procedure for removing various charged ionic species from brackish water. Because the performance of graphene-based materials in capacitive deionization has fallen short of industry expectations, highly-crumpled, few-layered graphene (HCG) and highly-crumpled nitrogen-doped graphene (HCNDG) with high surface area have been introduced as promising candidates for CDI electrodes. HCG and HCNDG were prepared by liquid-phase, microwave-assisted exfoliation of graphite. An industrially-scalable, cost-effective, and simple approach was employed to synthesize HCG and HCNDG, resulting in few-layered graphene and nitrogen-doped graphene with large specific surface area. HCG and HCNDG were then utilized for manufacturing a new class of carbon nanostructure-based electrodes for use in large-scale CDI equipment. The electrosorption results indicated that both the HCG and HCNDG have fairly large specific surface areas, indicating their huge potential for capacitive deionization applications.

  3. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Progress Report No. 3 by Intelligent Automation Incorporated on Information Tailoring Enhancements for Large-Scale Social Data. Work performed within this reporting period includes enhanced Named Entity Recognition (NER).

  4. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate that the MSO methodology is well suited to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.

  5. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  6. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/ . teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
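
    As an illustration of the kind of workflow described in the two records above (not the authors' implementation), the sketch below first prunes parameters whose one-at-a-time perturbation barely changes the output metric, then grid-searches only the influential ones; run_pipeline is a hypothetical stand-in for a segmentation workflow returning a quality metric such as Dice.

      import itertools
      import numpy as np

      def screen_parameters(run_pipeline, defaults, spans, threshold=0.01):
          """Keep parameters whose perturbation moves the metric noticeably."""
          base = run_pipeline(defaults)
          influential = []
          for name, (lo, hi) in spans.items():
              for value in (lo, hi):
                  trial = dict(defaults, **{name: value})
                  if abs(run_pipeline(trial) - base) > threshold:
                      influential.append(name)
                      break
          return influential

      def tune(run_pipeline, defaults, spans, keep, points=5):
          """Exhaustive search over a coarse grid of the influential parameters."""
          grids = {k: np.linspace(*spans[k], points) for k in keep}
          best, best_score = dict(defaults), run_pipeline(defaults)
          for combo in itertools.product(*grids.values()):
              trial = dict(defaults, **dict(zip(grids, combo)))
              score = run_pipeline(trial)
              if score > best_score:
                  best, best_score = trial, score
          return best, best_score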

  7. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, which is based on a coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory limitations associated with large images. In geometric SIFT, area constraints help validate the candidate matches and reduce the search complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
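
    As a rough illustration of the coarse stage only (not the paper's SR-SIFT or geometric SIFT), the sketch below matches standard OpenCV SIFT features on downsampled images and estimates an approximate transform that could then guide block-wise matching at full resolution; file paths and the scale factor are placeholders.

      import cv2
      import numpy as np

      def coarse_match(ref_path, sen_path, scale=0.25):
          ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
          sen = cv2.imread(sen_path, cv2.IMREAD_GRAYSCALE)
          ref_s = cv2.resize(ref, None, fx=scale, fy=scale)
          sen_s = cv2.resize(sen, None, fx=scale, fy=scale)
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(ref_s, None)
          k2, d2 = sift.detectAndCompute(sen_s, None)
          pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
          good = [p[0] for p in pairs
                  if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
          src = np.float32([k1[m.queryIdx].pt for m in good]) / scale
          dst = np.float32([k2[m.trainIdx].pt for m in good]) / scale
          # A robustly estimated similarity transform provides the geometric
          # constraint used to pair up full-resolution blocks for fine matching.
          model, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
          return model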

  8. Continuous Purification of Colloidal Quantum Dots in Large-Scale Using Porous Electrodes in Flow Channel.

    PubMed

    Lim, Hosub; Woo, Ju Young; Lee, Doh C; Lee, Jinkee; Jeong, Sohee; Kim, Duckjong

    2017-02-27

    Colloidal quantum dots (QDs) afford huge potential in numerous applications owing to their excellent optical and electronic properties. After the synthesis of QDs, separating QDs from unreacted impurities at large scale is one of the biggest issues in achieving scalable and high-performance optoelectronic applications. Thus far, however, continuous purification methods, which are essential for mass production, have rarely been reported. In this study, we developed a new continuous purification process that is suitable for the mass production of high-quality QDs. As-synthesized QDs are driven by electrophoresis in a flow channel, captured by porous electrodes and finally separated from the unreacted impurities. Nuclear magnetic resonance and ultraviolet/visible/near-infrared absorption spectroscopic data clearly showed that the impurities were efficiently removed from the QDs, with the purification yield, defined as the ratio of the mass of purified QDs to that of QDs in the crude solution, reaching up to 87%. We could also successfully predict the purification yield under different purification conditions with a simple theoretical model. The proposed large-scale purification process could be an important cornerstone for the mass production and industrial use of high-quality QDs.

  9. Continuous Purification of Colloidal Quantum Dots in Large-Scale Using Porous Electrodes in Flow Channel

    NASA Astrophysics Data System (ADS)

    Lim, Hosub; Woo, Ju Young; Lee, Doh Chang; Lee, Jinkee; Jeong, Sohee; Kim, Duckjong

    2017-11-01

    Colloidal quantum dots (QDs) afford huge potential in numerous applications owing to their excellent optical and electronic properties. After the synthesis of QDs, separating QDs from unreacted impurities at large scale is one of the biggest issues in achieving scalable and high-performance optoelectronic applications. Thus far, however, continuous purification methods, which are essential for mass production, have rarely been reported. In this study, we developed a new continuous purification process that is suitable for the mass production of high-quality QDs. As-synthesized QDs are driven by electrophoresis in a flow channel, captured by porous electrodes and finally separated from the unreacted impurities. Nuclear magnetic resonance and ultraviolet/visible/near-infrared absorption spectroscopic data clearly showed that the impurities were efficiently removed from the QDs, with the purification yield, defined as the ratio of the mass of purified QDs to that of QDs in the crude solution, reaching up to 87%. We could also successfully predict the purification yield under different purification conditions with a simple theoretical model. The proposed large-scale purification process could be an important cornerstone for the mass production and industrial use of high-quality QDs.

  10. Aerodynamic Design of a Dual-Flow Mach 7 Hypersonic Inlet System for a Turbine-Based Combined-Cycle Hypersonic Propulsion System

    NASA Technical Reports Server (NTRS)

    Sanders, Bobby W.; Weir, Lois J.

    2008-01-01

    A new hypersonic inlet for a turbine-based combined-cycle (TBCC) engine has been designed. This split-flow inlet is designed to provide flow to an over-under propulsion system with turbofan and dual-mode scramjet engines for flight from takeoff to Mach 7. It utilizes a variable-geometry ramp, high-speed cowl lip rotation, and a rotating low-speed cowl that serves as a splitter to divide the flow between the low-speed turbofan and the high-speed scramjet and to isolate the turbofan at high Mach numbers. The low-speed inlet was designed for Mach 4, the maximum mode transition Mach number. Integration of the Mach 4 inlet into the Mach 7 inlet imposed significant constraints on the low-speed inlet design, including a large amount of internal compression. The inlet design was used to develop mechanical designs for two inlet mode transition test models: small-scale (IMX) and large-scale (LIMX) research models. The large-scale model is designed to facilitate multi-phase testing including inlet mode transition and inlet performance assessment, controls development, and integrated systems testing with turbofan and scramjet engines.

  11. Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report

    NASA Technical Reports Server (NTRS)

    Schwartz, R. A.; Carvalho, P.; Cutler, M. J.

    1986-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction in addition to the evaluation of aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the (LAP) Prop-Fan pitch control system. The pitch control system which controls blade angle and propeller speed consists of two separate assemblies. The first is the control unit which provides the hydraulic supply, speed governing and feather function for the system. The second unit is the hydro-mechanical pitch change actuator which directly changes blade angle (pitch) as scheduled by the control.

  12. A procedural method for the efficient implementation of full-custom VLSI designs

    NASA Technical Reports Server (NTRS)

    Belk, P.; Hickey, N.

    1987-01-01

    An imbedded language system for the layout of very large scale integration (VLSI) circuits is examined. It is shown that through the judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs more comparable to semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.

  13. paraGSEA: a scalable approach for large-scale gene expression profiling

    PubMed Central

    Peng, Shaoliang; Yang, Shunyun

    2017-01-01

    Abstract More studies have been conducted using gene expression similarity to identify functional connections among genes, diseases and drugs. Gene Set Enrichment Analysis (GSEA) is a powerful analytical method for interpreting gene expression data. However, due to its enormous computational overhead in the estimation of significance level step and the multiple hypothesis testing step, its computational scalability and efficiency are poor on large-scale datasets. We propose paraGSEA for efficient large-scale transcriptome data analysis. By optimization, the overall time complexity of paraGSEA is reduced from O(mn) to O(m+n), where m is the length of the gene sets and n is the length of the gene expression profiles, which contributes a more than 100-fold increase in performance compared with other popular GSEA implementations such as GSEA-P, SAM-GS and GSEA2. By further parallelization, a near-linear speed-up is gained on both workstations and clusters in an efficient manner with high scalability and performance on large-scale datasets. The analysis time for the whole LINCS phase I dataset (GSE92742) was reduced to nearly half an hour on a 1000-node cluster of Tianhe-2, or within 120 hours on a 96-core workstation. The source code of paraGSEA is licensed under the GPLv3 and available at http://github.com/ysycloud/paraGSEA. PMID:28973463
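
    To make the stated complexity concrete (an illustration only, not paraGSEA's code), the unweighted enrichment-score walk below uses an O(1) membership test so that each gene set costs O(n + m) per ranked profile, rather than the O(mn) of a naive membership scan repeated at every rank position.

      def enrichment_score(ranked_genes, gene_set):
          """Unweighted GSEA-style running-sum statistic."""
          members = set(gene_set)              # O(m) to build
          n, m = len(ranked_genes), len(members)
          hit_step, miss_step = 1.0 / m, 1.0 / (n - m)
          running, best = 0.0, 0.0
          for gene in ranked_genes:            # single O(n) pass
              running += hit_step if gene in members else -miss_step
              if abs(running) > abs(best):
                  best = running
          return best

      profile = ["g%d" % i for i in range(1, 11)]
      print(enrichment_score(profile, {"g1", "g2", "g9"}))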

  14. Hydrogen Production from Nuclear Energy via High Temperature Electrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien; Carl M. Stoots; J. Stephen Herring

    2006-04-01

    This paper presents the technical case for high-temperature nuclear hydrogen production. A general thermodynamic analysis of hydrogen production based on high-temperature thermal water splitting processes is presented. Specific details of hydrogen production based on high-temperature electrolysis are also provided, including results of recent experiments performed at the Idaho National Laboratory. Based on these results, high-temperature electrolysis appears to be a promising technology for efficient large-scale hydrogen production.
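
    As background (a standard thermodynamic identity, not taken from the report above), the appeal of high-temperature electrolysis follows from splitting the total energy demand of water splitting into electrical and thermal parts:

      \Delta H(T) = \Delta G(T) + T\,\Delta S(T)

    Since the electrical part \Delta G decreases with temperature (from roughly 237 kJ/mol at 25 °C to on the order of 180-190 kJ/mol at typical solid-oxide electrolysis temperatures of 800-900 °C), a larger share of the splitting energy can be supplied as nuclear process heat rather than electricity, which is what makes the approach attractive for efficient large-scale hydrogen production.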

  15. The impact of pH inhomogeneities on CHO cell physiology and fed-batch process performance - two-compartment scale-down modelling and intracellular pH excursion.

    PubMed

    Brunner, Matthias; Braun, Philipp; Doppler, Philipp; Posch, Christoph; Behrens, Dirk; Herwig, Christoph; Fricke, Jens

    2017-07-01

    Due to high mixing times and base addition from the top of the vessel, pH inhomogeneities are most likely to occur during large-scale mammalian processes. The goal of this study was to set up a scale-down model of a 10-12 m³ stirred tank bioreactor and to investigate the effect of pH perturbations on CHO cell physiology and process performance. Short-term changes in extracellular pH are hypothesized to affect intracellular pH and thus cell physiology. Therefore, batch fermentations, including pH shifts to 9.0 and 7.8, were conducted in regular one-compartment systems. The short-term adaptation of the cells' intracellular pH showed an immediate increase in response to elevated extracellular pH. With this basis of fundamental knowledge, a two-compartment system was established which is capable of simulating defined pH inhomogeneities. In contrast to state-of-the-art literature, the scale-down model includes parameters (e.g. the volume of the inhomogeneous zone) as they might occur during large-scale processes. pH inhomogeneity studies in the two-compartment system were performed by simulating temporary pH zones of pH 9.0. The specific growth rate, especially during the exponential growth phase, was strongly affected, resulting in a decreased maximum viable cell density and final product titer. The gathered results indicate that even short-term exposure of cells to elevated pH values during large-scale processes can affect cell physiology and overall process performance. In particular, it could be shown for the first time that pH perturbations, which might occur during the early process phase, have to be considered in scale-down models of mammalian processes. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE PAGES

    Ceurvorst, L.; Savin, A.; Ratan, N.; ...

    2018-04-20

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10¹⁸ W/cm²) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities as expected. However, contrary to previous large scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  17. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceurvorst, L.; Savin, A.; Ratan, N.

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10¹⁸ W/cm²) kilojoule laser pulses through large density scale length (~390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities as expected. However, contrary to previous large scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  18. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE PAGES

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on aerogel radiator, composite mirrors and high-packed and high-segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. As a result, the tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  19. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL progression problem number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
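
    As a generic illustration of the snapshot-based subspace construction underlying such reduced-order methods (not the ROMUSE implementation itself), the sketch below extracts a Karhunen-Loeve/POD basis from sampled model outputs via an SVD and truncates it at a chosen energy fraction.

      import numpy as np

      def reduced_basis(snapshots, energy=0.99):
          """snapshots: (n_dof, n_samples) array; returns a basis of shape (n_dof, r)."""
          X = snapshots - snapshots.mean(axis=1, keepdims=True)
          U, s, _ = np.linalg.svd(X, full_matrices=False)
          cumulative = np.cumsum(s**2) / np.sum(s**2)
          r = int(np.searchsorted(cumulative, energy)) + 1
          return U[:, :r]

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 40))  # rank-3 snapshots
      print(reduced_basis(X).shape)   # expect roughly (500, 3)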

  20. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  1. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    PubMed

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve the complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a big challenge. Therefore, we propose an efficient and robust framework based on cloud for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster, where the former is used to improve the performance of index building, and the latter is used to provide high concurrent online TCMRs retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real-time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide high concurrent online TCMRs retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real-time. The semantics-based expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance the availability and universality, where the medical reports are displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides some advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
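
    As a generic pattern (not the system described above), the sketch below builds an inverted index over chunks of records in parallel and merges the partial indexes; query expansion, ranking and the distributed search cluster are omitted.

      from collections import defaultdict
      from concurrent.futures import ProcessPoolExecutor

      def index_chunk(chunk):
          """chunk: list of (doc_id, text); returns term -> set of doc_ids."""
          partial = defaultdict(set)
          for doc_id, text in chunk:
              for term in text.lower().split():
                  partial[term].add(doc_id)
          return partial

      def build_index(records, workers=4):
          chunks = [records[i::workers] for i in range(workers)]
          merged = defaultdict(set)
          with ProcessPoolExecutor(max_workers=workers) as pool:
              for partial in pool.map(index_chunk, chunks):
                  for term, ids in partial.items():
                      merged[term] |= ids
          return merged

      if __name__ == "__main__":
          docs = [(1, "chronic cough herbal decoction"), (2, "cough with fever")]
          print(sorted(build_index(docs)["cough"]))   # [1, 2]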

  2. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area networks (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed, ATM network and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed, WAN's for enabling the routine, location independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  3. Simulating neural systems with Xyce.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiek, Richard Louis; Thornquist, Heidi K.; Mei, Ting

    2012-12-01

    Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce and their use in the simulation and analysis of neuron systems.

  4. Static analysis techniques for semiautomatic synthesis of message passing software skeletons

    DOE PAGES

    Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...

    2015-06-29

    The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.

  5. Externally induced frontoparietal synchronization modulates network dynamics and enhances working memory performance.

    PubMed

    Violante, Ines R; Li, Lucia M; Carmichael, David W; Lorenz, Romy; Leech, Robert; Hampshire, Adam; Rothwell, John C; Sharp, David J

    2017-03-14

    Cognitive functions such as working memory (WM) are emergent properties of large-scale network interactions. Synchronisation of oscillatory activity might contribute to WM by enabling the coordination of long-range processes. However, causal evidence for the way oscillatory activity shapes network dynamics and behavior in humans is limited. Here we applied transcranial alternating current stimulation (tACS) to exogenously modulate oscillatory activity in a right frontoparietal network that supports WM. Externally induced synchronization improved performance when cognitive demands were high. Simultaneously collected fMRI data reveals tACS effects dependent on the relative phase of the stimulation and the internal cognitive processing state. Specifically, synchronous tACS during the verbal WM task increased parietal activity, which correlated with behavioral performance. Furthermore, functional connectivity results indicate that the relative phase of frontoparietal stimulation influences information flow within the WM network. Overall, our findings demonstrate a link between behavioral performance in a demanding WM task and large-scale brain synchronization.

  6. Externally induced frontoparietal synchronization modulates network dynamics and enhances working memory performance

    PubMed Central

    Violante, Ines R; Li, Lucia M; Carmichael, David W; Lorenz, Romy; Leech, Robert; Hampshire, Adam; Rothwell, John C; Sharp, David J

    2017-01-01

    Cognitive functions such as working memory (WM) are emergent properties of large-scale network interactions. Synchronisation of oscillatory activity might contribute to WM by enabling the coordination of long-range processes. However, causal evidence for the way oscillatory activity shapes network dynamics and behavior in humans is limited. Here we applied transcranial alternating current stimulation (tACS) to exogenously modulate oscillatory activity in a right frontoparietal network that supports WM. Externally induced synchronization improved performance when cognitive demands were high. Simultaneously collected fMRI data reveals tACS effects dependent on the relative phase of the stimulation and the internal cognitive processing state. Specifically, synchronous tACS during the verbal WM task increased parietal activity, which correlated with behavioral performance. Furthermore, functional connectivity results indicate that the relative phase of frontoparietal stimulation influences information flow within the WM network. Overall, our findings demonstrate a link between behavioral performance in a demanding WM task and large-scale brain synchronization. DOI: http://dx.doi.org/10.7554/eLife.22001.001 PMID:28288700

  7. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: Overview and Air-side System Description

    NASA Technical Reports Server (NTRS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina

    2016-01-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.

  8. Production of fullerenes with concentrated solar flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, M. J.; Fields, C.; Lewandowski, A.

    1994-01-01

    Research at the National Renewable Energy Laboratory (NREL) has demonstrated that fullerenes can be produced using highly concentrated sunlight from a solar furnace. Since they were first synthesized in 1989, fullerenes have been the subject of intense research. They show considerable commercial potential in advanced materials and have potential applications that include semiconductors, superconductors, high-performance metals, and medical technologies. The most common fullerene is C₆₀, which is a molecule with a geometry resembling a soccer ball. Graphite vaporization methods such as pulsed-laser vaporization, resistive heating, and carbon arc have been used to produce fullerenes. None of these, however, seems capable of producing fullerenes economically on a large scale. The use of concentrated sunlight may help avoid the scale-up limitations inherent in more established production processes. Recently, researchers at NREL made fullerenes in NREL's 10 kW High Flux Solar Furnace (HFSF) with a vacuum reaction chamber designed to deliver a solar flux of 1200 W/cm² to a graphite pellet. Analysis of the resulting carbon soot by mass spectrometry and high-pressure liquid chromatography confirmed the existence of fullerenes. These results are very encouraging and we are optimistic that concentrated solar flux can provide a means for large-scale, economical production of fullerenes. This paper presents our method, experimental apparatus, and results of fullerene production research performed with the HFSF.

  9. Examiners and Content and Site: Oh My! a National Organization's Investigation of Score Variation in Large-Scale Performance Assessments

    ERIC Educational Resources Information Center

    Sebok, Stefanie S.; Roy, Marguerite; Klinger, Don A.; De Champlain, André F.

    2015-01-01

    Examiner effects and content specificity are two well known sources of construct irrelevant variance that present great challenges in performance-based assessments. National medical organizations that are responsible for large-scale performance based assessments experience an additional challenge as they are responsible for administering…

  10. A 100,000 Scale Factor Radar Range.

    PubMed

    Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser

    2017-12-19

    The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable, since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility to perform fast, convenient, and inexpensive measurements.
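
    For context (standard electromagnetic model-scaling relations, not quoted from the article above): if every geometric dimension is reduced by a scale factor s and the wavelength is reduced by the same factor, the target's size in wavelengths is preserved and the measured cross section maps back to full scale as

      \lambda_{\mathrm{full}} = s\,\lambda_{\mathrm{model}}, \qquad \sigma_{\mathrm{full}} = s^{2}\,\sigma_{\mathrm{model}}

    (strictly valid for perfectly conducting targets; material properties must also be reproduced at the scaled frequency). With s = 100,000, a near-infrared source at an assumed ~1.5 μm corresponds to a full-scale wavelength of about 0.15 m, i.e. a radar operating near 2 GHz.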

  11. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  12. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With the continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
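
    For background, one widely used form of the Oustaloup recursive approximation (shown here as a generic reference; the paper's exact formulation may differ) replaces the fractional operator s^α, 0 < α < 1, by a band-limited integer-order filter over the frequency range [ω_b, ω_h]:

      s^{\alpha} \approx K \prod_{k=-N}^{N} \frac{s + \omega_k'}{s + \omega_k}, \qquad
      \omega_k' = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k + N + \frac{1}{2}(1-\alpha)}{2N+1}}, \qquad
      \omega_k = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k + N + \frac{1}{2}(1+\alpha)}{2N+1}}, \qquad
      K = \omega_h^{\alpha}

    The resulting integer-order transfer function of order 2N+1 is the kind of model from which a step-response vector, and hence a dynamic matrix controller, can be built.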

  13. 1 million-Q optomechanical microdisk resonators for sensing with very large scale integration

    NASA Astrophysics Data System (ADS)

    Hermouet, M.; Sansa, M.; Banniard, L.; Fafin, A.; Gely, M.; Allain, P. E.; Santos, E. Gil; Favero, I.; Alava, T.; Jourdan, G.; Hentz, S.

    2018-02-01

    Cavity optomechanics has become a promising route towards the development of ultrasensitive sensors for a wide range of applications including mass, chemical and biological sensing. In this study, we demonstrate the potential of Very Large Scale Integration (VLSI) with state-of-the-art low-loss silicon optomechanical microdisks for sensing applications. We report microdisks exhibiting optical Whispering Gallery Modes (WGM) with quality factors of 1 million, yielding high displacement sensitivity and strong coupling between optical WGMs and in-plane mechanical Radial Breathing Modes (RBM). Such high-Q microdisks, with mechanical resonance frequencies in the 10² MHz range, were fabricated on 200 mm wafers with Variable Shape Electron Beam lithography. Benefiting from ultrasensitive readout, their Brownian motion could be resolved with good signal-to-noise ratio at ambient pressure, as well as in liquid, despite high frequency operation and large fluidic damping: the mechanical quality factor dropped from a few 10³ in air to a few tens in liquid, and the mechanical resonance frequency shifted down by a few percent. Proceeding one step further, we performed all-optical operation of the resonators in air using a pump-probe scheme. Our results show that our VLSI process is a viable approach for the next generation of sensors operating in vacuum, gas or liquid phase.

  14. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method, to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  15. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
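
    As a generic illustration of cutting training cost by sub-selection (not the authors' sub-selective quantization algorithm), the sketch below learns a PCA hashing projection from a random subset of descriptors and then binarizes every descriptor by sign; the dataset here is synthetic.

      import numpy as np

      def train_pca_hash(X, n_bits=32, subset=2000, seed=0):
          """Fit mean and projection on a random row subset of X (N x d)."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(X), size=min(subset, len(X)), replace=False)
          S = X[idx]
          mean = S.mean(axis=0)
          _, _, Vt = np.linalg.svd(S - mean, full_matrices=False)
          return mean, Vt[:n_bits].T           # projection: d x n_bits

      def encode(X, mean, W):
          return ((X - mean) @ W > 0).astype(np.uint8)   # compact binary codes

      rng = np.random.default_rng(1)
      X = rng.standard_normal((100000, 128)).astype(np.float32)
      mean, W = train_pca_hash(X, n_bits=64)
      print(encode(X, mean, W).shape)          # (100000, 64)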

  16. Optical metasurfaces for high angle steering at visible wavelengths

    DOE PAGES

    Lin, Dianmin; Melli, Mauro; Poliakov, Evgeni; ...

    2017-05-23

    Metasurfaces have facilitated the replacement of conventional optical elements with ultrathin and planar photonic structures. Previous designs of metasurfaces were limited to small deflection angles and small ranges of the angle of incidence. Here, we have created two types of Si-based metasurfaces to steer visible light to a large deflection angle. These structures exhibit high diffraction efficiencies over a broad range of angles of incidence. We have demonstrated metasurfaces working both in transmission and reflection modes based on conventional thin film silicon processes that are suitable for the large-scale fabrication of high-performance devices.

  17. Effects of local and large-scale climate patterns on estuarine resident fishes: The example of Pomatoschistus microps and Pomatoschistus minutus

    NASA Astrophysics Data System (ADS)

    Nyitrai, Daniel; Martinho, Filipe; Dolbeth, Marina; Rito, João; Pardal, Miguel A.

    2013-12-01

    Large-scale and local climate patterns are known to influence several aspects of the life cycle of marine fish. In this paper, we used a 9-year database (2003-2011) to analyse the populations of two estuarine resident fishes, Pomatoschistus microps and Pomatoschistus minutus, in order to determine their relationships with varying environmental stressors operating over local and large scales. This study was performed in the Mondego estuary, Portugal. Firstly, the variations in abundance, growth, population structure and secondary production were evaluated. These species appeared in high densities in the beginning of the study period, with subsequent occasional high annual density peaks, while their secondary production was lower in dry years. The relationships between yearly fish abundance and the environmental variables were evaluated separately for both species using Spearman correlation analysis, considering the yearly abundance peaks for the whole population, juveniles and adults. Among the local climate patterns, precipitation, river runoff, salinity and temperature were used in the analyses, and the North Atlantic Oscillation (NAO) index and sea surface temperature (SST) were tested as large-scale factors. For P. microps, precipitation and NAO were the significant factors explaining the abundance of the whole population, as well as of the juveniles and the adults. For P. minutus, river runoff was the significant predictor for the whole population, juveniles and adults. The results for both species suggest a differential influence of climate patterns on the various life cycle stages, confirming also the importance of estuarine resident fishes as indicators of changes in local and large-scale climate patterns, related to global climate change.

  18. Really Large Scale Computer Graphic Projection Using Lasers and Laser Substitutes

    NASA Astrophysics Data System (ADS)

    Rother, Paul

    1989-07-01

    This paper reflects on past laser projects that displayed vector-scanned computer graphic images onto very large and irregular surfaces. Since the availability of microprocessors and high-powered visible lasers, very large scale computer graphics projection has become a reality. Owing to their independence from a focusing lens, lasers easily project onto distant and irregular surfaces and have been used for amusement parks, theatrical performances, concert performances, industrial trade shows and dance clubs. Lasers have been used to project onto mountains, buildings, 360° globes, clouds of smoke and water. These methods have proven successful in installations at Epcot Theme Park in Florida; Stone Mountain Park in Georgia; the 1984 Olympics in Los Angeles; hundreds of corporate trade shows and thousands of musical performances. Using new ColorRay™ technology, the use of costly and fragile lasers is no longer necessary. Utilizing fiber optic technology, the functionality of lasers can be duplicated for new and exciting projection possibilities. ColorRay™ technology has enjoyed worldwide recognition in conjunction with Pink Floyd's and George Michael's worldwide tours.

  19. The LAMAR: A high throughput X-ray astronomy facility for a moderate cost mission

    NASA Technical Reports Server (NTRS)

    Gorenstein, P.; Schwartz, D.

    1981-01-01

    The performance of a large area modular array of reflectors (LAMAR) is considered in several hypothetical observations relevant to: (1) cosmology, the X-ray background, and large scale structure of the universe; (2) clusters of galaxies and their evolution; (3) quasars and other active galactic nuclei; (4) compact objects in our galaxy; (5) stellar coronae; and (6) energy input to the interstellar medium.

  20. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
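
    The central data structure described above is the replicated reconstruction object: each worker accumulates partial updates for its share of the projections into a private copy of the reconstruction grid, and the copies are merged in a reduce step. The Python sketch below illustrates only that pattern, not Trace's actual code; the grid size, projection count and the additive stand-in for back-projection work are assumptions.

    ```python
    # Schematic of the replicated-reconstruction-object pattern (process-level parallelism).
    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    GRID = 64     # toy reconstruction grid (GRID x GRID)
    N_PROJ = 360  # hypothetical number of projections

    def partial_update(proj_indices):
        """Accumulate a toy additive update for one worker's subset of projections."""
        replica = np.zeros((GRID, GRID))              # private replicated object
        rng = np.random.default_rng(int(proj_indices[0]))
        for _ in proj_indices:
            replica += rng.random((GRID, GRID))       # stand-in for real back-projection work
        return replica

    if __name__ == "__main__":
        chunks = np.array_split(np.arange(N_PROJ), 8)  # distribute projections over 8 workers
        with ProcessPoolExecutor(max_workers=8) as pool:
            replicas = list(pool.map(partial_update, chunks))
        reconstruction = np.sum(replicas, axis=0) / N_PROJ  # reduce: merge the replicas
        print(reconstruction.shape, reconstruction.mean())
    ```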

  1. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time on a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations we have applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  2. Single cell versus large population analysis: cell variability in elemental intracellular concentration and distribution.

    PubMed

    Malucelli, Emil; Procopio, Alessandra; Fratini, Michela; Gianoncelli, Alessandra; Notargiacomo, Andrea; Merolle, Lucia; Sargenti, Azzurra; Castiglioni, Sara; Cappadone, Concettina; Farruggia, Giovanna; Lombardo, Marco; Lagomarsino, Stefano; Maier, Jeanette A; Iotti, Stefano

    2018-01-01

    The quantification of elemental concentration in cells is usually performed by analytical assays on large populations, missing peculiar but important rare cells. The present article aims at comparing elemental quantification in single cells and in cell populations for three different cell types, using a new approach for single-cell elemental analysis performed at the sub-micrometer scale that combines X-ray fluorescence microscopy and atomic force microscopy. The attention is focused on the light element Mg, exploiting the opportunity to compare the single-cell quantification with the cell-population analysis carried out by a highly Mg-selective fluorescent chemosensor. The results show that the single-cell analysis reveals the same Mg differences found in large populations of the different cell strains studied. However, in one of the cell strains, single-cell analysis reveals two cells with an exceptionally high intracellular Mg content compared with the other cells of the same strain. The single-cell analysis allows mapping Mg and other light elements in whole cells at the sub-micrometer scale. A detailed intensity correlation analysis on the two cells with the highest Mg content reveals that the Mg subcellular localization correlates with oxygen in a different fashion with respect to the other sister cells of the same strain. Graphical abstract: single-cell or large-population analysis, that is the question!

  3. Landscape Characterization of Arctic Ecosystems Using Data Mining Algorithms and Large Geospatial Datasets

    NASA Astrophysics Data System (ADS)

    Langford, Z. L.; Kumar, J.; Hoffman, F. M.

    2015-12-01

    Observations indicate that over the past several decades, landscape processes in the Arctic have been changing or intensifying. A dynamic Arctic landscape has the potential to alter ecosystems across a broad range of scales. Accurate characterization is useful for understanding the properties and organization of the landscape, optimal sampling network design, measurement and process upscaling, and for establishing a landscape-based framework for multi-scale modeling of ecosystem processes. This study seeks to delineate the landscape of the Seward Peninsula of Alaska into ecoregions using large volumes (terabytes) of high spatial resolution satellite remote-sensing data. Defining high-resolution ecoregion boundaries is difficult because many ecosystem processes in Arctic ecosystems occur at small local to regional scales, which are often not resolved by coarse-resolution satellites (e.g., MODIS). We seek to use data-fusion techniques and data analytics algorithms applied to Phased Array type L-band Synthetic Aperture Radar (PALSAR), Interferometric Synthetic Aperture Radar (IFSAR), Satellite for Observation of Earth (SPOT), WorldView-2, WorldView-3, and QuickBird-2 to develop high-resolution (˜5 m) ecoregion maps for multiple time periods. Traditional analysis methods and algorithms are insufficient for analyzing and synthesizing such large geospatial data sets, and those algorithms rarely scale out onto large distributed-memory parallel computer systems. We seek to develop computationally efficient algorithms and techniques using high-performance computing for characterization of Arctic landscapes. We will apply a variety of data analytics algorithms, such as cluster analysis, complex object-based image analysis (COBIA), and neural networks. We also propose to use representativeness analysis within the Seward Peninsula domain to determine optimal sampling locations for fine-scale measurements. This methodology should provide an initial framework for analyzing dynamic landscape trends in Arctic ecosystems, such as shrubification and disturbances, and for integration of ecoregions into multi-scale models.
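
    The simplest of the data analytics algorithms listed above is cluster analysis of co-registered pixel spectra. The sketch below shows that idea in miniature with k-means; the random values stand in for stacked PALSAR/SPOT/WorldView bands, and the tile size, band count and cluster count are assumptions rather than the authors' settings.

    ```python
    # Hedged sketch: k-means clustering of multi-band pixels into candidate ecoregions.
    import numpy as np
    from sklearn.cluster import KMeans

    rows, cols, bands = 200, 200, 6              # hypothetical tile and band count
    cube = np.random.rand(rows, cols, bands)     # stand-in for co-registered imagery
    pixels = cube.reshape(-1, bands)             # one sample per pixel

    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pixels)
    ecoregion_map = labels.reshape(rows, cols)   # per-pixel ecoregion assignment
    print(np.bincount(labels))                   # pixel count per candidate ecoregion
    ```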

  4. Wafer-size free-standing single-crystalline graphene device arrays

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jing, Gaoshan; Zhang, Bo; Sando, Shota; Cui, Tianhong

    2014-08-01

    We report an approach for the growth of wafer-scale, addressable single-crystalline graphene (SCG) arrays that uses pre-patterned seeds to control nucleation. The growth mechanism and superb properties of SCG were studied. Large arrays of free-standing SCG devices were realized. Characterization of SCG as nanoswitches shows excellent performance, with a lifetime (>22 000 cycles) two orders of magnitude longer than that of other graphene nanoswitches reported so far. This work not only shows the possibility of producing wafer-scale, high-quality SCG device arrays but also explores the superb performance of SCG as nanodevices.

  5. Biotic homogenization can decrease landscape-scale forest multifunctionality.

    PubMed

    van der Plas, Fons; Manning, Pete; Soliveres, Santiago; Allan, Eric; Scherer-Lorenzen, Michael; Verheyen, Kris; Wirth, Christian; Zavala, Miguel A; Ampoorter, Evy; Baeten, Lander; Barbaro, Luc; Bauhus, Jürgen; Benavides, Raquel; Benneter, Adam; Bonal, Damien; Bouriaud, Olivier; Bruelheide, Helge; Bussotti, Filippo; Carnol, Monique; Castagneyrol, Bastien; Charbonnier, Yohan; Coomes, David Anthony; Coppi, Andrea; Bastias, Cristina C; Dawud, Seid Muhie; De Wandeler, Hans; Domisch, Timo; Finér, Leena; Gessler, Arthur; Granier, André; Grossiord, Charlotte; Guyot, Virginie; Hättenschwiler, Stephan; Jactel, Hervé; Jaroszewicz, Bogdan; Joly, François-Xavier; Jucker, Tommaso; Koricheva, Julia; Milligan, Harriet; Mueller, Sandra; Muys, Bart; Nguyen, Diem; Pollastrini, Martina; Ratcliffe, Sophia; Raulund-Rasmussen, Karsten; Selvi, Federico; Stenlid, Jan; Valladares, Fernando; Vesterdal, Lars; Zielínski, Dawid; Fischer, Markus

    2016-03-29

    Many experiments have shown that local biodiversity loss impairs the ability of ecosystems to maintain multiple ecosystem functions at high levels (multifunctionality). In contrast, the role of biodiversity in driving ecosystem multifunctionality at landscape scales remains unresolved. We used a comprehensive pan-European dataset, including 16 ecosystem functions measured in 209 forest plots across six European countries, and performed simulations to investigate how local plot-scale richness of tree species (α-diversity) and their turnover between plots (β-diversity) are related to landscape-scale multifunctionality. After accounting for variation in environmental conditions, we found that relationships between α-diversity and landscape-scale multifunctionality varied from positive to negative depending on the multifunctionality metric used. In contrast, when significant, relationships between β-diversity and landscape-scale multifunctionality were always positive, because a high spatial turnover in species composition was closely related to a high spatial turnover in functions that were supported at high levels. Our findings have major implications for forest management and indicate that biotic homogenization can have previously unrecognized and negative consequences for large-scale ecosystem multifunctionality.

  6. Biotic homogenization can decrease landscape-scale forest multifunctionality

    PubMed Central

    van der Plas, Fons; Manning, Pete; Soliveres, Santiago; Allan, Eric; Scherer-Lorenzen, Michael; Verheyen, Kris; Wirth, Christian; Zavala, Miguel A.; Ampoorter, Evy; Baeten, Lander; Barbaro, Luc; Bauhus, Jürgen; Benavides, Raquel; Benneter, Adam; Bonal, Damien; Bouriaud, Olivier; Bruelheide, Helge; Bussotti, Filippo; Carnol, Monique; Castagneyrol, Bastien; Charbonnier, Yohan; Coppi, Andrea; Bastias, Cristina C.; Dawud, Seid Muhie; De Wandeler, Hans; Domisch, Timo; Finér, Leena; Granier, André; Grossiord, Charlotte; Guyot, Virginie; Hättenschwiler, Stephan; Jactel, Hervé; Jaroszewicz, Bogdan; Joly, François-xavier; Jucker, Tommaso; Koricheva, Julia; Milligan, Harriet; Mueller, Sandra; Muys, Bart; Nguyen, Diem; Pollastrini, Martina; Ratcliffe, Sophia; Raulund-Rasmussen, Karsten; Selvi, Federico; Stenlid, Jan; Valladares, Fernando; Vesterdal, Lars; Zielínski, Dawid; Fischer, Markus

    2016-01-01

    Many experiments have shown that local biodiversity loss impairs the ability of ecosystems to maintain multiple ecosystem functions at high levels (multifunctionality). In contrast, the role of biodiversity in driving ecosystem multifunctionality at landscape scales remains unresolved. We used a comprehensive pan-European dataset, including 16 ecosystem functions measured in 209 forest plots across six European countries, and performed simulations to investigate how local plot-scale richness of tree species (α-diversity) and their turnover between plots (β-diversity) are related to landscape-scale multifunctionality. After accounting for variation in environmental conditions, we found that relationships between α-diversity and landscape-scale multifunctionality varied from positive to negative depending on the multifunctionality metric used. In contrast, when significant, relationships between β-diversity and landscape-scale multifunctionality were always positive, because a high spatial turnover in species composition was closely related to a high spatial turnover in functions that were supported at high levels. Our findings have major implications for forest management and indicate that biotic homogenization can have previously unrecognized and negative consequences for large-scale ecosystem multifunctionality. PMID:26979952

  7. Manganese oxides-based composite electrodes for supercapacitors

    NASA Astrophysics Data System (ADS)

    Su, Dongyun; Ma, Jun; Huang, Mingyu; Liu, Feng; Chen, Taizhou; Liu, Chao; Ni, Hongjun

    2017-06-01

    In recent years, nanostructured transition metal oxides have attracted wide attention as a new class of energy storage materials owing to their excellent electrochemical performance in supercapacitors. This review focuses on MnO2-based transition metal oxides and their composite electrode materials for supercapacitor applications. Work reported in recent years on different manganese oxide nanostructures, such as nanorods, nanosheets, nanowires and nanotubes, is surveyed, together with brief explanations of their properties. Research on enhancing materials' properties by combining different materials at the micron or nano scale is still limited, and we therefore discuss the effects of the sizes of the different components and of their synergy on performance. Moreover, the low-cost, large-scale fabrication of flexible supercapacitors with high performance (high energy density and cycle stability) is highlighted and discussed.

  8. Does Instructional Format Really Matter? Cognitive Load Theory, Multimedia and Teaching English Literature

    ERIC Educational Resources Information Center

    Martin, Stewart

    2012-01-01

    This article reports a quasi-experimental study on the effects of multimedia teaching and learning in English Literature--a subject which places high cognitive load on students. A large-scale study was conducted in 4 high-achieving secondary schools to examine the differences made to students' learning and performance by the use of multimedia and…

  9. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise an HPS platform. This research is driven by issues concerning large-scale simulation resource deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. The article then investigates and discusses key approaches in VSIM, including simulation resource modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction, and (3) achieve fault-tolerant simulation.

  10. Continuous Flow Polymer Synthesis toward Reproducible Large-Scale Production for Efficient Bulk Heterojunction Organic Solar Cells.

    PubMed

    Pirotte, Geert; Kesters, Jurgen; Verstappen, Pieter; Govaerts, Sanne; Manca, Jean; Lutsen, Laurence; Vanderzande, Dirk; Maes, Wouter

    2015-10-12

    Organic photovoltaics (OPV) have attracted great interest as a solar cell technology with appealing mechanical, aesthetical, and economies-of-scale features. To drive OPV toward economic viability, low-cost, large-scale module production has to be realized in combination with increased top-quality material availability and minimal batch-to-batch variation. To this end, continuous flow chemistry can serve as a powerful tool. In this contribution, a flow protocol is optimized for the high performance benzodithiophene-thienopyrroledione copolymer PBDTTPD and the material quality is probed through systematic solar-cell evaluation. A stepwise approach is adopted to turn the batch process into a reproducible and scalable continuous flow procedure. Solar cell devices fabricated using the obtained polymer batches deliver an average power conversion efficiency of 7.2 %. Upon incorporation of an ionic polythiophene-based cathodic interlayer, the photovoltaic performance could be enhanced to a maximum efficiency of 9.1 %. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. High-efficiency nanostructured silicon solar cells on a large scale realized through the suppression of recombination channels.

    PubMed

    Zhong, Sihua; Huang, Zengguang; Lin, Xingxing; Zeng, Yang; Ma, Yechi; Shen, Wenzhong

    2015-01-21

    Nanostructured silicon solar cells show great potential for new-generation photovoltaics due to their ability to approach ideal light-trapping. However, the nanofeatured morphology that brings about the optical benefits also introduces new recombination channels, and severe deterioration in the electrical performance even outweighs the gain in optics in most attempts. This Research News article aims to review the recent progress in the suppression of carrier recombination in silicon nanostructures, with the emphasis on the optimization of surface morphology and controllable nanostructure height and emitter doping concentration, as well as application of dielectric passivation coatings, providing design rules to realize high-efficiency nanostructured silicon solar cells on a large scale. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as that of the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm-2, a record-breaking result and the highest VRFB performance reported in the open literature.

  13. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
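
    The core pattern described above is embarrassingly parallel trait extraction: each image is processed independently, so a pool of workers can fan out over the files. The sketch below is illustrative only and does not use the Image Harvest API; the directory name, threshold and "shoot area" trait are assumptions, and Pillow is assumed to be available.

    ```python
    # Hedged sketch: extract a simple digital trait (above-threshold pixel count) from many
    # phenotyping images in parallel. Not the Image Harvest implementation.
    import glob
    from multiprocessing import Pool

    import numpy as np
    from PIL import Image  # Pillow is assumed to be installed

    def shoot_area(path):
        gray = np.asarray(Image.open(path).convert("L"), dtype=float)
        return path, int((gray > 30).sum())        # hypothetical plant/background threshold

    if __name__ == "__main__":
        files = sorted(glob.glob("images/*.png"))  # hypothetical image directory
        with Pool(processes=8) as pool:
            for path, area in pool.map(shoot_area, files):
                print(path, area)
    ```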

  14. High power Nb-doped LiFePO4 Li-ion battery cathodes; pilot-scale synthesis and electrochemical properties

    NASA Astrophysics Data System (ADS)

    Johnson, Ian D.; Blagovidova, Ekaterina; Dingwall, Paul A.; Brett, Dan J. L.; Shearing, Paul R.; Darr, Jawwad A.

    2016-09-01

    High power, phase-pure Nb-doped LiFePO4 (LFP) nanoparticles are synthesised using a pilot-scale continuous hydrothermal flow synthesis process (production rate of 6 kg per day) in the range 0.01-2.00 at% Nb with respect to total transition metal content. EDS analysis suggests that Nb is homogeneously distributed throughout the structure. The addition of fructose as a reagent in the hydrothermal flow process, followed by a post-synthesis heat-treatment, affords a continuous graphitic carbon coating on the particle surfaces. Electrochemical testing reveals that cycling performance improves with increasing dopant concentration, up to a maximum of 1.0 at% Nb, at which point a specific capacity of 110 mAh g-1 is obtained at 10 C (6 min for the charge or discharge). This is an excellent result for a high-power LFP-based cathode material, particularly when considering that the synthesis was performed on a large pilot-scale apparatus.

  15. Measurement-Driven Characterization of the Mobile Environment

    ERIC Educational Resources Information Center

    Soroush, Hamed

    2013-01-01

    The concurrent deployment of high-quality wireless networks and large-scale cloud services offers the promise of secure ubiquitous access to seemingly limitless amount of content. However, as users' expectations have grown more demanding, the performance and connectivity failures endemic to the existing networking infrastructure have become more…

  16. Statistical machine translation for biomedical text: are we there yet?

    PubMed

    Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre

    2011-01-01

    In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign language-English translation pairs (bi-directionally). We built a high-performing in-house system and evaluated its output for each translation pair on a large scale, both with automated BLEU scores and with human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German, French and Spanish-English bi-directional translation pairs for both Google Translate and our system.
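
    The automated part of such an evaluation is a corpus-level BLEU score of system output against reference translations. The sketch below shows the mechanics only, with two made-up tokenized PubMed-style titles; it is not the authors' evaluation pipeline.

    ```python
    # Hedged sketch: corpus-level BLEU for machine-translated titles (toy data).
    from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

    references = [[["effect", "of", "aspirin", "on", "platelet", "function"]],
                  [["management", "of", "chronic", "heart", "failure"]]]       # one reference list per title
    hypotheses = [["effect", "of", "aspirin", "on", "platelet", "function"],
                  ["management", "of", "the", "chronic", "heart", "failure"]]  # system outputs

    score = corpus_bleu(references, hypotheses,
                        smoothing_function=SmoothingFunction().method1)  # smoothing helps on short titles
    print(f"corpus BLEU = {score:.3f}")
    ```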

  17. High-resolution Observations of Hα Spectra with a Subtractive Double Pass

    NASA Astrophysics Data System (ADS)

    Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.

    2018-02-01

    High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with a diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a performance similar to that of FPI-based systems, with a high spatial and moderate spectral resolution across a FOV of 100'' × 100'' and a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine-structure in the line-core intensity of Hα at spatial scales of about 0.5'' exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence that is comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.

  18. Facile Synthesis of Layer Structured GeP3/C with Stable Chemical Bonding for Enhanced Lithium-Ion Storage

    NASA Astrophysics Data System (ADS)

    Qi, Wen; Zhao, Haihua; Wu, Ying; Zeng, Hong; Tao, Tao; Chen, Chao; Kuang, Chunjiang; Zhou, Shaoxiong; Huang, Yunhui

    2017-02-01

    Recently, metal phosphides have been investigated as potential anode materials because of their higher specific capacities compared with those of carbonaceous materials. However, the rapid capacity fade upon cycling leads to poor durability and short cycle life, which cannot meet the needs of lithium-ion batteries with high energy density. Herein, we report a layer-structured GeP3/C nanocomposite anode material with high performance prepared by a facile and large-scale ball-milling method via an in-situ mechanical reaction. P-O-C bonds are formed in the composite, leading to close contact between GeP3 and carbon. As a result, the GeP3/C anode displays excellent lithium storage performance with a high reversible capacity of up to 1109 mA h g-1 after 130 cycles at a current density of 0.1 A g-1. Even at high current densities of 2 and 5 A g-1, the reversible capacities are still as high as 590 and 425 mA h g-1, respectively. This suggests that the GeP3/C composite is promising for high-energy lithium-ion batteries and that mechanical milling is an efficient method to fabricate such composite electrode materials, especially for large-scale application.

  19. Validating the simulation of large-scale parallel applications using statistical characteristics

    DOE PAGES

    Zhang, Deli; Wilke, Jeremiah; Hendry, Gilbert; ...

    2016-03-01

    Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and the proposed metrics serve as reliable criteria that progress toward automating the simulation tuning process.
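
    The contrast drawn above is between a single coarse number (percent error in total execution time) and fine-grained statistics over trace events. A minimal way to make that concrete, under the assumption that per-event durations have been extracted from both traces, is to compare the two distributions directly, for example with a two-sample Kolmogorov-Smirnov test; the gamma-distributed samples below are synthetic stand-ins, and this is not the paper's toolset.

    ```python
    # Hedged sketch: coarse total-time error vs. a fine-grained distributional comparison.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    real_events = rng.gamma(shape=2.0, scale=1.5, size=5000)  # stand-in: event durations from the real run (ms)
    sim_events = rng.gamma(shape=2.1, scale=1.4, size=5000)   # stand-in: event durations from the simulation (ms)

    percent_error = abs(sim_events.sum() - real_events.sum()) / real_events.sum() * 100
    stat, p_value = ks_2samp(real_events, sim_events)         # compares the full duration distributions

    print(f"total-time percent error: {percent_error:.1f}%")
    print(f"KS statistic: {stat:.3f}, p = {p_value:.3g}")
    ```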

  20. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and details of how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real-world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.
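
    The data- and space-based partitioning mentioned above is what lets spatial queries run as independent MapReduce tasks: records are keyed by the grid tile they fall in (map), and the query is then evaluated inside each tile (reduce). The Python sketch below mimics that flow in the spirit of, but without using, Hadoop-GIS; the tile size and point data are arbitrary.

    ```python
    # Hedged sketch: grid-based spatial partitioning followed by a per-tile aggregation.
    from collections import defaultdict
    import random

    TILE = 0.1                                           # hypothetical tile size of the partition grid
    points = [(random.random(), random.random()) for _ in range(10000)]

    # "Map": key each point by the tile it falls in.
    tiles = defaultdict(list)
    for x, y in points:
        tiles[(int(x / TILE), int(y / TILE))].append((x, y))

    # "Reduce": evaluate a simple query independently inside each tile (here: point counts).
    counts = {tile: len(pts) for tile, pts in tiles.items()}
    print(max(counts.values()), "points in the densest tile")
    ```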

  1. Attributes and Behaviors of Performance-Centered Systems.

    ERIC Educational Resources Information Center

    Gery, Gloria

    1995-01-01

    Examines attributes, characteristics, and behaviors of performance-centered software packages that are emerging in the consumer software marketplace and compares them with large-scale systems software being designed by internal information systems staffs and vendors of large-scale software designed for financial, manufacturing, processing, and…

  2. Large scale, highly conductive and patterned transparent films of silver nanowires on arbitrary substrates and their application in touch screens

    NASA Astrophysics Data System (ADS)

    Madaria, Anuj R.; Kumar, Akshay; Zhou, Chongwu

    2011-06-01

    The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large scale, high quality patterned films of silver nanowires. The transparency and conductivity of the films were controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σDC/σOp, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared with other candidate alternatives to doped metal oxide films. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and a sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows high performance using another commonly used figure-of-merit, ΦTE. Also, the Ag nanowire film/PET shows good mechanical flexibility, and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.

  3. Channeling of multikilojoule high-intensity laser beams in an inhomogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivancic, S.; Haberberger, D.; Habara, H.

    Channeling experiments were performed that demonstrate the transport of high-intensity (>10¹⁸ W/cm²), multikilojoule laser light through a millimeter-sized, inhomogeneous (~300-μm density scale length) laser produced plasma up to overcritical density, which is an important step forward for the fast-ignition concept. The background plasma density and the density depression inside the channel were characterized with a novel optical probe system. The channel progression velocity was measured, which agrees well with theoretical predictions based on large scale particle-in-cell simulations, confirming scaling laws for the required channeling laser energy and laser pulse duration, which are important parameters for future integrated fast-ignition channeling experiments.

  4. Printable nanostructured silicon solar cells for high-performance, large-area flexible photovoltaics.

    PubMed

    Lee, Sung-Min; Biswas, Roshni; Li, Weigu; Kang, Dongseok; Chan, Lesley; Yoon, Jongseung

    2014-10-28

    Nanostructured forms of crystalline silicon represent an attractive materials building block for photovoltaics due to their potential benefits to significantly reduce the consumption of active materials, relax the requirement of materials purity for high performance, and hence achieve greatly improved levelized cost of energy. Despite successful demonstrations for their concepts over the past decade, however, the practical application of nanostructured silicon solar cells for large-scale implementation has been hampered by many existing challenges associated with the consumption of the entire wafer or expensive source materials, difficulties to precisely control materials properties and doping characteristics, or restrictions on substrate materials and scalability. Here we present a highly integrable materials platform of nanostructured silicon solar cells that can overcome these limitations. Ultrathin silicon solar microcells integrated with engineered photonic nanostructures are fabricated directly from wafer-based source materials in configurations that can lower the materials cost and can be compatible with deterministic assembly procedures to allow programmable, large-scale distribution, unlimited choices of module substrates, as well as lightweight, mechanically compliant constructions. Systematic studies on optical and electrical properties, photovoltaic performance in experiments, as well as numerical modeling elucidate important design rules for nanoscale photon management with ultrathin, nanostructured silicon solar cells and their interconnected, mechanically flexible modules, where we demonstrate 12.4% solar-to-electric energy conversion efficiency for printed ultrathin (∼ 8 μm) nanostructured silicon solar cells when configured with near-optimal designs of rear-surface nanoposts, antireflection coating, and back-surface reflector.

  5. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at a friction Reynolds number of 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal"-size domain was ˜2300 wall units long and ˜750 wall units wide; the size was taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where the high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of the large scales. These large thermal structures represent some kind of an echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.
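
    The closing observation, that temperature correlates more strongly with time-averaged than with instantaneous velocity, can be illustrated with a toy time series in which the "temperature" responds to a running mean of the velocity. The data below are synthetic and the window length is an arbitrary assumption; the snippet is not derived from the DNS fields.

    ```python
    # Hedged sketch: RMS fluctuation and instantaneous vs. time-averaged velocity correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    nt, window = 2000, 50                             # number of samples and averaging window (assumptions)
    u = rng.normal(1.0, 0.1, nt)                      # instantaneous streamwise velocity samples
    kernel = np.ones(window) / window
    theta = 0.5 * np.convolve(u - 1.0, kernel, mode="same") + rng.normal(0, 0.01, nt)  # slow thermal response

    theta_rms = np.sqrt(np.mean((theta - theta.mean()) ** 2))
    u_avg = np.convolve(u, kernel, mode="same")       # time-averaged velocity

    corr_inst = np.corrcoef(theta, u)[0, 1]
    corr_avg = np.corrcoef(theta, u_avg)[0, 1]
    print(f"theta_rms={theta_rms:.4f}, corr(inst)={corr_inst:.2f}, corr(avg)={corr_avg:.2f}")
    ```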

  6. Large-scale wind-tunnel investigation of a close-coupled canard-delta-wing fighter model through high angles of attack

    NASA Technical Reports Server (NTRS)

    Stoll, F.; Koenig, D. G.

    1983-01-01

    Data obtained through very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which, effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.

  7. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  8. Experimental Investigation of a Large-Scale Low-Boom Inlet Concept

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie M.; Chima, Rodrick V.; Vyas, Manan A.; Wayman, Thomas R.; Conners, Timothy R.; Reger, Robert W.

    2011-01-01

    A large-scale low-boom inlet concept was tested in the NASA Glenn Research Center 8- x 6- foot Supersonic Wind Tunnel. The purpose of this test was to assess inlet performance, stability and operability at various Mach numbers and angles of attack. During this effort, two models were tested: a dual stream inlet designed to mimic potential aircraft flight hardware integrating a high-flow bypass stream; and a single stream inlet designed to study a configuration with a zero-degree external cowl angle and to permit surface visualization of the vortex generator flow on the internal centerbody surface. During the course of the test, the low-boom inlet concept was demonstrated to have high recovery, excellent buzz margin, and high operability. This paper will provide an overview of the setup, show a brief comparison of the dual stream and single stream inlet results, and examine the dual stream inlet characteristics.

  9. Magnetic Doppler imaging of Ap stars

    NASA Astrophysics Data System (ADS)

    Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.

    2008-04-01

    Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled in the context of a simple dipole field. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry as well as small-scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution of these fields, as well as a refined magnetic field model for Ap stars. Mapping will be performed using high resolution and signal-to-noise ratio time-series of spectra in both circular and linear polarisation obtained using the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL spectropolarimeters (Pic du Midi Observatory). With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering the detailed surface maps of the vector magnetic field and chemical abundances.

  10. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Bary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, such as more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.
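
    Feature (a) above, parallel movement of a single large striped file, amounts to carving the file into byte ranges and moving each range concurrently. The toy sketch below shows that idea with ordinary files on disk; the file name, stripe count and local stripe targets are assumptions, and this is not the integrated product's code.

    ```python
    # Hedged sketch: copy one large file as N parallel stripes.
    import os
    from concurrent.futures import ProcessPoolExecutor

    SRC = "big.dat"        # hypothetical large source file
    N_STRIPES = 4

    def copy_stripe(args):
        index, offset, length = args
        with open(SRC, "rb") as src, open(f"stripe_{index:02d}.dat", "wb") as dst:
            src.seek(offset)
            dst.write(src.read(length))              # toy: reads the whole stripe into memory
        return index

    if __name__ == "__main__":
        size = os.path.getsize(SRC)
        stripe = (size + N_STRIPES - 1) // N_STRIPES
        jobs = [(i, i * stripe, max(0, min(stripe, size - i * stripe))) for i in range(N_STRIPES)]
        with ProcessPoolExecutor(max_workers=N_STRIPES) as pool:
            print("stripes written:", list(pool.map(copy_stripe, jobs)))
    ```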

  11. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interfaces, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, such as more caching and less robust semantics. Currently the number of extremely scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products, including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.

  12. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures

    NASA Astrophysics Data System (ADS)

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A.; Park, Jiwoong

    2017-10-01

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides--which represent one- and three-atom-thick two-dimensional building blocks, respectively--have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  13. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures.

    PubMed

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A; Park, Jiwoong

    2017-10-12

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides-which represent one- and three-atom-thick two-dimensional building blocks, respectively-have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  14. High Performance Nano-Crystalline Oxide Fuel Cell Materials. Defects, Structures, Interfaces, Transport, and Electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, Scott; Poeppelmeier, Ken; Mason, Tom

    This project addresses fundamental materials challenges in solid oxide electrochemical cells, devices that have a broad range of important energy applications. Although nano-scale mixed ionically and electronically conducting (MIEC) materials provide an important opportunity to improve performance and reduce device operating temperature, durability issues threaten to limit their utility and have remained largely unexplored. Our work has focused on both (1) understanding the fundamental processes related to oxygen transport and surface-vapor reactions in nano-scale MIEC materials, and (2) determining and understanding the key factors that control their long-term stability. Furthermore, materials stability has been explored under the "extreme" conditions encountered in many solid oxide cell applications, i.e., very high or very low effective oxygen pressures and high current density.

  15. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
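
    The traffic calculation mentioned above reduces to multiplying an activity mix by per-activity payload sizes. The back-of-the-envelope sketch below shows the arithmetic only; every number in it is a hypothetical study parameter, not a figure from the thesis.

    ```python
    # Hedged sketch: aggregate traffic generated by one hypothetical activity mix.
    users = 50_000                 # concurrent users (assumption)
    searches_per_user_hr = 6       # queries per user per hour
    result_kb = 40                 # average result page size, kB
    retrievals_per_user_hr = 2     # full-document retrievals per user per hour
    document_mb = 3.5              # average document size, MB

    traffic_gb_per_hr = users * (searches_per_user_hr * result_kb / 1e6
                                 + retrievals_per_user_hr * document_mb / 1e3)
    print(f"~{traffic_gb_per_hr:,.0f} GB/hour, ~{traffic_gb_per_hr * 8 / 3600:,.2f} Gbit/s sustained")
    ```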

  16. Free Global Dsm Assessment on Large Scale Areas Exploiting the Potentialities of the Innovative Google Earth Engine Platform

    NASA Astrophysics Data System (ADS)

    Nascetti, A.; Di Rita, M.; Ravanelli, R.; Amicuzi, M.; Esposito, S.; Crespi, M.

    2017-05-01

    The high-performance cloud-computing platform Google Earth Engine has been developed for global-scale analysis based on Earth observation data. In particular, in this work, the geometric accuracy of the two most used nearly-global free DSMs (SRTM and ASTER) has been evaluated on the territories of four American States (Colorado, Michigan, Nevada, Utah) and one Italian Region (Trentino Alto-Adige, Northern Italy), exploiting the potentiality of this platform. These are large areas characterized by different terrain morphology, land covers and slopes. The assessment has been performed using two different reference DSMs: the USGS National Elevation Dataset (NED) and a LiDAR acquisition. The DSM accuracy has been evaluated through computation of standard statistical parameters, both at the global scale (considering the whole State/Region) and as a function of the terrain morphology using several slope classes. The geometric accuracy in terms of standard deviation and NMAD ranges for SRTM from 2-3 meters in the first slope class to about 45 meters in the last one, whereas for ASTER the values range from 5-6 to 30 meters. In general, the performed analysis shows a better accuracy for SRTM in the flat areas, whereas the ASTER GDEM is more reliable in the steep areas, where the slopes increase. These preliminary results highlight the potential of GEE to perform DSM assessment on a global scale.
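
    The two accuracy measures quoted above, standard deviation and NMAD, are both computed from the per-pixel elevation differences between the evaluated DSM and the reference, binned by slope class. The sketch below uses synthetic differences and arbitrary slope classes purely to show the formulas; it does not reproduce the study's numbers.

    ```python
    # Hedged sketch: per-slope-class standard deviation and NMAD of DSM elevation differences.
    import numpy as np

    rng = np.random.default_rng(1)
    dh = rng.normal(0, 5, 100_000)            # DSM-minus-reference differences, metres (synthetic)
    slope = rng.uniform(0, 60, 100_000)       # terrain slope, degrees (synthetic)

    for lo, hi in [(0, 10), (10, 30), (30, 60)]:              # hypothetical slope classes
        d = dh[(slope >= lo) & (slope < hi)]
        nmad = 1.4826 * np.median(np.abs(d - np.median(d)))   # normalized median absolute deviation
        print(f"slope {lo:>2}-{hi:<2} deg: std = {d.std():.2f} m, NMAD = {nmad:.2f} m")
    ```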

  17. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  18. Impacts and Viability of Open Source Software on Earth Science Metadata Clearing House and Service Registry Applications

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Cechini, M. F.; Mitchell, A.

    2011-12-01

    Earth Science applications typically deal with large amounts of data and high throughput rates, if not also high transaction rates. While Open Source is frequently used for smaller scientific applications, large scale, highly available systems frequently fall back to "enterprise" class solutions like Oracle RAC or commercial grade JEE Application Servers. NASA's Earth Observing System Data and Information System (EOSDIS) provides end-to-end capabilities for managing NASA's Earth science data from multiple sources - satellites, aircraft, field measurements, and various other programs. A core capability of EOSDIS, the Earth Observing System (EOS) Clearinghouse (ECHO), is a highly available search and order clearinghouse of over 100 million pieces of science data that has evolved from its early R&D days to a fully operational system. Over the course of this maturity ECHO has largely transitioned from commercial frameworks, databases, and operating systems to Open Source solutions...and in some cases, back. In this talk we discuss the progression of our technological solutions and our lessons learned in the areas of: high-performance, large-scale searching solutions; geospatial search capabilities and dealing with multiple coordinate systems; search and storage of variable-format source (science) data; highly available deployment solutions; and scalable (elastic) solutions for visual searching and image handling. Throughout the evolution of the ECHO system we have had to evaluate solutions with respect to performance, cost, developer productivity, reliability, and maintainability in the context of supporting global science users. Open Source solutions have played a significant role in our architecture and development but several critical commercial components remain (or have been reinserted) to meet our operational demands.

  19. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  20. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    NASA Astrophysics Data System (ADS)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  1. Effects of forcing time scale on the simulated turbulent flows and turbulent collision statistics of inertial particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosa, B., E-mail: bogdan.rosa@imgw.pl; Parishani, H.; Department of Earth System Science, University of California, Irvine, California 92697-3100

    2015-01-15

    In this paper, we study systematically the effects of forcing time scale in the large-scale stochastic forcing scheme of Eswaran and Pope [“An examination of forcing in direct numerical simulations of turbulence,” Comput. Fluids 16, 257 (1988)] on the simulated flow structures and statistics of forced turbulence. Using direct numerical simulations, we find that the forcing time scale affects the flow dissipation rate and flow Reynolds number. Other flow statistics can be predicted using the altered flow dissipation rate and flow Reynolds number, except when the forcing time scale is made unrealistically large to yield a Taylor microscale flow Reynolds number of 30 and less. We then study the effects of forcing time scale on the kinematic collision statistics of inertial particles. We show that the radial distribution function and the radial relative velocity may depend on the forcing time scale when it becomes comparable to the eddy turnover time. This dependence, however, can be largely explained in terms of altered flow Reynolds number and the changing range of flow length scales present in the turbulent flow. We argue that removing this dependence is important when studying the Reynolds number dependence of the turbulent collision statistics. The results are also compared to those based on a deterministic forcing scheme to better understand the role of large-scale forcing, relative to that of the small-scale turbulence, on turbulent collision of inertial particles. To further elucidate the correlation between the altered flow structures and dynamics of inertial particles, a conditional analysis has been performed, showing that the regions of higher collision rate of inertial particles are well correlated with the regions of lower vorticity. Regions of higher concentration of pairs at contact are found to be highly correlated with the region of high energy dissipation rate.
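
    For orientation on the forcing time scale discussed above, the Eswaran-Pope scheme drives each forced low-wavenumber Fourier mode with a complex Ornstein-Uhlenbeck process; the sketch below is the commonly quoted textbook form, not a transcription of the authors' implementation, and the symbols (T_L, sigma, W) follow that convention.

```latex
% Ornstein-Uhlenbeck acceleration for each forced mode k below the forcing cutoff.
% T_L is the forcing time scale, \sigma^2 the forcing variance, W a complex Wiener process.
\mathrm{d}\hat{a}(\mathbf{k},t) \;=\;
  -\frac{\hat{a}(\mathbf{k},t)}{T_L}\,\mathrm{d}t
  \;+\;\left(\frac{2\sigma^{2}}{T_L}\right)^{1/2}\mathrm{d}W(\mathbf{k},t)
```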

  2. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame-rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
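
    The core of a single-conjugate AO real-time pipeline such as the one benchmarked here is typically a large matrix-vector multiply (reconstruction matrix times slope vector) executed once per frame. The sketch below times such a multiply with NumPy; the dimensions are rough ELT-scale guesses, not the configuration used in the paper.

```python
import time
import numpy as np

# Rough ELT-scale SCAO guesses: ~10^4 wavefront-sensor slopes, ~5x10^3 actuators.
n_slopes, n_actuators = 10_000, 5_000
recon_matrix = np.random.rand(n_actuators, n_slopes).astype(np.float32)
slopes = np.random.rand(n_slopes).astype(np.float32)

# Time the per-frame wavefront reconstruction step (matrix-vector multiply).
n_frames = 1_000
start = time.perf_counter()
for _ in range(n_frames):
    commands = recon_matrix @ slopes
per_frame = (time.perf_counter() - start) / n_frames
print(f"Mean reconstruction time per frame: {per_frame * 1e3:.3f} ms "
      f"(loop rate ~{1.0 / per_frame:.0f} Hz)")
```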

  3. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    NASA Astrophysics Data System (ADS)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially increasing data volumes at NCI. Traditional HPC and data environments are still made available in a way that flexibly provides the tools, services and supporting software systems on these new petascale infrastructures. But to enable the research to take place at this scale, the data, metadata and software now need to evolve together - creating a new integrated high performance infrastructure. The new infrastructure at NCI currently supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. One of the challenges for NCI has been to support existing techniques and methods, while carefully preparing the underlying infrastructure for the transition needed for the next class of Data-intensive Science. In doing so, a flexible range of techniques and software can be made available for application across the corpus of data collections available, and to provide a new infrastructure for future interdisciplinary research.

  4. Implementing High-Performance Geometric Multigrid Solver with Naturally Grained Messages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Zheng, Yili

    2015-10-26

    Structured-grid linear solvers often require manual packing and unpacking of communication data to achieve high performance. Orchestrating this process efficiently is challenging, labor-intensive, and potentially error-prone. In this paper, we explore an alternative approach that communicates the data with naturally grained message sizes without manual packing and unpacking. This approach is the distributed analogue of shared-memory programming, taking advantage of the global address space in PGAS languages to provide substantial programming ease. However, its performance may suffer from the large number of small messages. We investigate the runtime support required in the UPC++ library for this naturally grained version to close the performance gap between the two approaches and attain comparable performance at scale, using the High-Performance Geometric Multigrid (HPGMG-FV) benchmark as a driver.
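
    For context, the "manual packing" that the authors avoid looks roughly like the mpi4py sketch below: a non-contiguous ghost face is copied into a contiguous buffer and exchanged as one large message, whereas the naturally grained alternative would instead issue many small remote updates through the PGAS global address space. This is an illustrative stand-in, not the HPGMG-FV or UPC++ code.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 64
box = np.random.rand(n, n, n)  # local structured-grid box

if comm.Get_size() >= 2 and rank in (0, 1):
    neighbor = 1 - rank
    # Manual packing: copy the non-contiguous grid face into one contiguous
    # buffer, then exchange it with the neighbor as a single large message.
    send_buf = np.ascontiguousarray(box[:, :, -1])
    recv_buf = np.empty_like(send_buf)
    comm.Sendrecv(send_buf, dest=neighbor, recvbuf=recv_buf, source=neighbor)
    ghost_face = recv_buf  # would be unpacked into the ghost layer here
```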

  5. The influence of cognitive load on spatial search performance.

    PubMed

    Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D

    2014-01-01

    During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.

  6. MAINTAINING DATA QUALITY IN THE PERFORMANCE OF A LARGE SCALE INTEGRATED MONITORING EFFORT

    EPA Science Inventory

    Macauley, John M. and Linda C. Harwell. In press. Maintaining Data Quality in the Performance of a Large Scale Integrated Monitoring Effort (Abstract). To be presented at EMAP Symposium 2004: Integrated Monitoring and Assessment for Effective Water Quality Management, 3-7 May 200...

  7. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.
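
    A simple baseline for this kind of partitioning is a greedy "largest grid first" assignment that keeps the estimated work balanced across processors; the sketch below illustrates that idea with made-up grid sizes and is not one of the three strategies evaluated in the paper.

```python
import heapq

def greedy_partition(grid_workloads, n_procs):
    """Assign overset grids (estimated cell counts) to processors, largest first,
    always placing the next grid on the currently least-loaded processor."""
    heap = [(0, p, []) for p in range(n_procs)]  # (load, proc_id, assigned grids)
    heapq.heapify(heap)
    for grid_id, work in sorted(grid_workloads.items(), key=lambda kv: -kv[1]):
        load, proc, grids = heapq.heappop(heap)
        grids.append(grid_id)
        heapq.heappush(heap, (load + work, proc, grids))
    return {proc: (load, grids) for load, proc, grids in heap}

# Hypothetical grid sizes (cells) for a multi-block overset configuration.
print(greedy_partition({"wing": 2_000_000, "fuselage": 3_500_000,
                        "nacelle": 800_000, "background": 5_000_000}, n_procs=4))
```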

  8. Advanced Nanostructured Anode Materials for Sodium-Ion Batteries.

    PubMed

    Wang, Qidi; Zhao, Chenglong; Lu, Yaxiang; Li, Yunming; Zheng, Yuheng; Qi, Yuruo; Rong, Xiaohui; Jiang, Liwei; Qi, Xinguo; Shao, Yuanjun; Pan, Du; Li, Baohua; Hu, Yong-Sheng; Chen, Liquan

    2017-11-01

    Sodium-ion batteries (NIBs), due to the advantages of low cost and relatively high safety, have attracted widespread attention all over the world, making them a promising candidate for large-scale energy storage systems. However, their inherently lower energy density compared with lithium-ion batteries is an issue that should be further investigated and optimized. Toward grid-level energy storage applications, designing and discovering appropriate anode materials for NIBs is of great concern. Although many improvements and innovations have been achieved, several challenges still limit large-scale application, including low energy/power densities, moderate cycle performance, and low initial Coulombic efficiency. Advanced nanostructuring strategies for anode materials can significantly improve ion and electron transport kinetics, enhancing the electrochemical properties of battery systems. Herein, this Review intends to provide a comprehensive summary of the progress of nanostructured anode materials for NIBs, where representative examples and corresponding storage mechanisms are discussed. Meanwhile, potential directions to obtain high-performance anode materials for NIBs are also proposed, which provide references for the further development of advanced anode materials for NIBs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. High-Temperature-Short-Time Annealing Process for High-Performance Large-Area Perovskite Solar Cells.

    PubMed

    Kim, Minjin; Kim, Gi-Hwan; Oh, Kyoung Suk; Jo, Yimhyun; Yoon, Hyun; Kim, Ka-Hyun; Lee, Heon; Kim, Jin Young; Kim, Dong Suk

    2017-06-27

    Organic-inorganic hybrid metal halide perovskite solar cells (PSCs) are attracting tremendous research interest due to their high solar-to-electric power conversion efficiency with a high possibility of cost-effective fabrication and certified power conversion efficiency now exceeding 22%. Although many effective methods for their application have been developed over the past decade, their practical transition to large-size devices has been restricted by difficulties in achieving high performance. Here we report on the development of a simple and cost-effective production method with high-temperature and short-time annealing processing to obtain uniform, smooth, and large-size grain domains of perovskite films over large areas. With high-temperature short-time annealing at 400 °C for 4 s, a perovskite film with an average domain size of 1 μm was obtained, owing to the fast solvent evaporation. Solar cells fabricated using this processing technique had a maximum power conversion efficiency exceeding 20% over a 0.1 cm² active area and 18% over a 1 cm² active area. We believe our approach will enable the realization of highly efficient large-area PSCs for practical development with a very simple and short-time procedure. This simple method should lead the field toward the fabrication of uniform large-scale perovskite films, which are necessary for the production of high-efficiency solar cells that may also be applicable to several other material systems for more widespread practical deployment.

  10. Quantum information processing with long-wavelength radiation

    NASA Astrophysics Data System (ADS)

    Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried

    To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.

  11. Design of composite flywheel rotors with soft cores

    NASA Astrophysics Data System (ADS)

    Kim, Taehan

    A flywheel is an inertial energy storage system in which the energy or momentum is stored in a rotating mass. Over the last twenty years, high-performance flywheels have been developed with significant improvements, showing potential as energy storage systems in a wide range of applications. Despite the great advances in fundamental knowledge and technology, the current successful rotors depend mainly on the recent developments of high-stiffness and high-strength carbon composites. These composites are expensive and the cost of flywheels made of them is high. The ultimate goal of the study presented here is the development of a cost-effective composite rotor made of a hybrid material. In this study, two-dimensional and three-dimensional analysis tools were developed and utilized in the design of the composite rim, and extensive spin tests were performed to validate the designed rotors and give a sound basis for large-scale rotor design. Hybrid rims made of several different composite materials can effectively reduce the radial stress in the composite rim, which is critical in the design of composite rims. Since the hybrid composite rims we studied employ low-cost glass fiber for the inside of the rim, and the result is large radial growth of the hybrid rim, conventional metallic hubs cannot be used in this design. A soft core developed in this study was successfully able to accommodate the large radial growth of the rim. High bonding strength at the shaft-to-core interface was achieved by the soft core being molded directly onto the steel shaft, and a tapered geometry was used to avoid stress concentrations at the shaft-to-core interface. Extensive spin tests were utilized for reverse engineering of the design of composite rotors, and there was good correlation between tests and analysis. A large-scale composite rotor for ground transportation is presented with the performance levels predicted for it.
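
    The radial-stress concern for thick composite rims noted above can be contrasted with the elementary thin-ring estimate of hoop stress under rotation shown below; this classical relation is offered only as background, not as the two- and three-dimensional analysis actually used in the dissertation.

```latex
% Hoop stress in a thin rotating ring of density \rho, mean radius r, angular speed \omega:
\sigma_{\theta} \;=\; \rho\,\omega^{2} r^{2}
% The storable kinetic energy per unit mass therefore scales with the allowable
% \sigma_{\theta}/\rho, which is why high specific-strength composites are attractive rim materials.
```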

  12. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  13. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shamis, Pavel; Graham, Richard L; Gorentla Venkata, Manjunath

    The scalability and performance of collective communication operations limit the scalability and performance of many scientific applications. This paper presents two new blocking and nonblocking Broadcast algorithms for communicators with arbitrary communication topology, and studies their performance. These algorithms benefit from increased concurrency and a reduced memory footprint, making them suitable for use on large-scale systems. Measuring small, medium, and large data Broadcasts on a Cray-XT5, using 24,576 MPI processes, the Cheetah algorithms outperform the native MPI on that system by 51%, 69%, and 9%, respectively, at the same process count. These results demonstrate an algorithmic approach to the implementation of the important class of collective communications, which is high performing, scalable, and also uses resources in a scalable manner.
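
    The Cheetah algorithms themselves are not reproduced here, but a conventional binomial-tree broadcast, sketched below with mpi4py point-to-point messages, shows the log(P)-depth communication pattern that such collectives build on.

```python
from mpi4py import MPI

def binomial_bcast(comm, data, root=0):
    """Broadcast `data` from `root` using a binomial tree of point-to-point messages."""
    size = comm.Get_size()
    rel = (comm.Get_rank() - root) % size  # rank relative to the root
    mask = 1
    # Receive phase: non-root ranks wait for the message from their tree parent.
    while mask < size:
        if rel & mask:
            data = comm.recv(source=(rel - mask + root) % size)
            break
        mask <<= 1
    # Send phase: forward to children at successively smaller mask values.
    mask >>= 1
    while mask > 0:
        if rel + mask < size:
            comm.send(data, dest=(rel + mask + root) % size)
        mask >>= 1
    return data

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    msg = "payload" if comm.Get_rank() == 0 else None
    print(comm.Get_rank(), binomial_bcast(comm, msg))
```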

  15. Large-scale seismic waveform quality metric calculation using Hadoop

    NASA Astrophysics Data System (ADS)

    Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.

    2016-09-01

    In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
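
    The sketch below shows, in PySpark, the general shape of such a metric computation: waveform segments keyed by channel are mapped to simple quality numbers (here, gap count and RMS amplitude). The metric choice, field names, and data layout are illustrative assumptions, not the pipeline described in the paper.

```python
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="waveform-quality-metrics")

def segment_metrics(record):
    """record = (channel_id, numpy array of samples); emit per-segment quality numbers."""
    channel, samples = record
    gaps = int(np.sum(~np.isfinite(samples)))
    clean = samples[np.isfinite(samples)]
    rms = float(np.sqrt(np.mean(clean ** 2))) if clean.size else 0.0
    return channel, (len(samples), gaps, rms)

# Placeholder RDD; in practice the segments would be read from archived waveform files.
segments = sc.parallelize([("NET.STA..BHZ", np.random.randn(86_400 * 40))])
print(segments.map(segment_metrics).collect())
sc.stop()
```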

  16. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.
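
    Although the paper uses the Trilinos stack, the basic pattern, a Krylov iteration wrapped around an algebraic multigrid preconditioner, can be sketched at small scale with SciPy and PyAMG for the Poisson benchmark mentioned above; this is a conceptual stand-in, not the authors' solver configuration.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Small 2D Poisson test matrix; the paper's runs use vastly larger systems.
A = pyamg.gallery.poisson((200, 200), format="csr")
b = np.random.rand(A.shape[0])

# Smoothed-aggregation AMG used as a preconditioner inside a Krylov (CG) iteration.
ml = pyamg.smoothed_aggregation_solver(A)
M = ml.aspreconditioner()

residuals = []
x, info = cg(A, b, M=M,
             callback=lambda xk: residuals.append(np.linalg.norm(b - A @ xk)))
print("converged" if info == 0 else "not converged", "iterations:", len(residuals))
```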

  17. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  18. Spatio-temporal modeling and optimization of a deformable-grating compressor for short high-energy laser pulses

    DOE PAGES

    Qiao, Jie; Papa, J.; Liu, X.

    2015-09-24

    Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.

  19. Hybrid-optimization strategy for the communication of large-scale Kinetic Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wu, Baodong; Li, Shigang; Zhang, Yunquan; Nie, Ningming

    2017-02-01

    The parallel Kinetic Monte Carlo (KMC) algorithm based on domain decomposition has been widely used in large-scale physical simulations. However, the communication overhead of the parallel KMC algorithm is critical, and severely degrades the overall performance and scalability. In this paper, we present a hybrid optimization strategy to reduce the communication overhead for parallel KMC simulations. We first propose a communication aggregation algorithm to reduce the total number of messages and eliminate the communication redundancy. Then, we utilize the shared memory to reduce the memory copy overhead of the intra-node communication. Finally, we optimize the communication scheduling using the neighborhood collective operations. We demonstrate the scalability and high performance of our hybrid optimization strategy by both theoretical and experimental analysis. Results show that the optimized KMC algorithm exhibits better performance and scalability than the well-known open-source library SPPARKS. On a 32-node Xeon E5-2680 cluster (total 640 cores), the optimized algorithm reduces the communication time by 24.8% compared with SPPARKS.
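
    The first of the three optimizations, communication aggregation, amounts to buffering all boundary-event updates destined for the same neighbor and sending them as a single message per KMC step rather than one message per event; the sketch below illustrates that idea generically and is not the authors' implementation.

```python
from collections import defaultdict

class AggregatedSender:
    """Buffer per-neighbor boundary updates and flush them as one message each."""

    def __init__(self, send_fn):
        self.send_fn = send_fn          # e.g. a wrapper around an MPI send
        self.buffers = defaultdict(list)

    def post_update(self, neighbor_rank, event):
        # Instead of sending each boundary event immediately (one small message
        # per event), append it to the buffer for that neighbor.
        self.buffers[neighbor_rank].append(event)

    def flush(self):
        # One aggregated message per neighbor per KMC step.
        for neighbor_rank, events in self.buffers.items():
            self.send_fn(neighbor_rank, events)
        self.buffers.clear()

# Usage with a stand-in send function:
sender = AggregatedSender(lambda rank, events: print(f"to {rank}: {len(events)} events"))
for event in [("flip", 12), ("flip", 13), ("swap", 40)]:
    sender.post_update(neighbor_rank=1, event=event)
sender.flush()
```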

  20. Large-scale generation of human iPSC-derived neural stem cells/early neural progenitor cells and their neuronal differentiation.

    PubMed

    D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonka, Vishwajit L; Nimgaonkar, Vishwajit L

    2014-01-01

    Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or are not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.

  1. Advances in compact manufacturing for shape and performance controllability of large-scale components-a review

    NASA Astrophysics Data System (ADS)

    Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li

    2017-01-01

    Research on compact manufacturing technology for shape and performance controllability of metallic components can realize simplification and high reliability of the manufacturing process while satisfying macro/micro-structure requirements. It is not only a key path to improving performance, saving material and energy, and achieving green manufacturing of components used in major equipment, but also a challenging subject at the frontiers of advanced plastic forming. Providing a novel horizon for manufacturing these critical components is therefore significant. Focused on high-performance large-scale components such as bearing rings, flanges, railway wheels, thick-walled pipes, etc., the conventional processes and their current status are summarized. The existing problems, including multi-pass heating, material and energy waste, high cost and high emissions, are discussed, and it is pointed out that present approaches cannot meet the demands of manufacturing high-quality components. Thus, the new techniques related to casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes are introduced in detail. The corresponding research contents, such as casting ring blanks, hot ring rolling, near-solid-state pressure forming, and hot extruding, are elaborated. Some findings on through-thickness microstructure evolution and mechanical properties are also presented. The components produced by the new techniques are mainly characterized by fine and homogeneous grains. Moreover, possible directions for further development of these techniques are suggested. Finally, the key scientific problems are proposed for the first time. All of these results and conclusions have reference value and guiding significance for the integrated control of shape and performance in advanced compact manufacturing.

  2. High performance computing and communications: Advancing the frontiers of information technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  3. Simulations of turbulent rotating flows using a subfilter scale stress model derived from the partially integrated transport modeling method

    NASA Astrophysics Data System (ADS)

    Chaouat, Bruno

    2012-04-01

    The partially integrated transport modeling (PITM) method [B. Chaouat and R. Schiestel, "A new partially integrated transport model for subgrid-scale stresses and dissipation rate for turbulent developing flows," Phys. Fluids 17, 065106 (2005), 10.1063/1.1928607; R. Schiestel and A. Dejoan, "Towards a new partially integrated transport model for coarse grid and unsteady turbulent flow simulations," Theor. Comput. Fluid Dyn. 18, 443 (2005), 10.1007/s00162-004-0155-z; B. Chaouat and R. Schiestel, "From single-scale turbulence models to multiple-scale and subgridscale models by Fourier transform," Theor. Comput. Fluid Dyn. 21, 201 (2007), 10.1007/s00162-007-0044-3; B. Chaouat and R. Schiestel, "Progress in subgrid-scale transport modelling for continuous hybrid non-zonal RANS/LES simulations," Int. J. Heat Fluid Flow 30, 602 (2009), 10.1016/j.ijheatfluidflow.2009.02.021] viewed as a continuous approach for hybrid RANS/LES (Reynolds averaged Navier-Stoke equations/large eddy simulations) simulations with seamless coupling between RANS and LES regions is used to derive a subfilter scale stress model in the framework of second-moment closure applicable in a rotating frame of reference. This present subfilter scale model is based on the transport equations for the subfilter stresses and the dissipation rate and appears well appropriate for simulating unsteady flows on relatively coarse grids or flows with strong departure from spectral equilibrium because the cutoff wave number can be located almost anywhere inside the spectrum energy. According to the spectral theory developed in the wave number space [B. Chaouat and R. Schiestel, "From single-scale turbulence models to multiple-scale and subgrid-scale models by Fourier transform," Theor. Comput. Fluid Dyn. 21, 201 (2007), 10.1007/s00162-007-0044-3], the coefficients used in this model are no longer constants but they are some analytical functions of a dimensionless parameter controlling the spectral distribution of turbulence. The pressure-strain correlation term encompassed in this model is inspired from the nonlinear SSG model [C. G. Speziale, S. Sarkar, and T. B. Gatski, "Modelling the pressure-strain correlation of turbulence: an invariant dynamical systems approach," J. Fluid Mech. 227, 245 (1991), 10.1017/S0022112091000101] developed initially for homogeneous rotating flows in RANS methodology. It is modeled in system rotation using the principle of objectivity. Its modeling is especially extended in a low Reynolds number version for handling non-homogeneous wall flows. The present subfilter scale stress model is then used for simulating large scales of rotating turbulent flows on coarse and medium grids at moderate, medium, and high rotation rates. It is also applied to perform a simulation on a refined grid at the highest rotation rate. As a result, it is found that the PITM simulations reproduce fairly well the mean features of rotating channel flows allowing a drastic reduction of the computational cost in comparison with the one required for performing highly resolved LES. Overall, the mean velocities and turbulent stresses are found to be in good agreement with the data of highly resolved LES [E. Lamballais, O. Metais, and M. Lesieur, "Spectral-dynamic model for large-eddy simulations of turbulent rotating flow," Theor. Comput. Fluid Dyn. 12, 149 (1998)]. The anisotropy character of the flow resulting from the rotation effects is also well reproduced in accordance with the reference data. 
Moreover, the PITM2 simulations performed on the medium grid predict qualitatively well the three-dimensional flow structures as well as the longitudinal roll cells which appear in the anticyclonic wall-region of the rotating flows. As expected, the PITM3 simulation performed on the refined grid reverts to highly resolved LES. The present model based on a rational formulation appears to be an interesting candidate for tackling a large variety of engineering flows subjected to rotation.
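
    The central PITM idea referred to in this record, making the model coefficients depend on the location of the spectral cutoff, is usually expressed through a modified destruction coefficient in the subfilter dissipation-rate equation; the relation below is the commonly quoted form and is given only as a reminder of the method, with notation that may differ from the paper's.

```latex
% PITM-type variation of the dissipation-equation coefficient with the ratio r of
% subfilter-scale to total turbulent energy (r -> 1 recovers RANS, r -> 0 tends to DNS):
C_{\varepsilon 2}^{*} \;=\; C_{\varepsilon 1} \;+\; r\left(C_{\varepsilon 2}-C_{\varepsilon 1}\right),
\qquad r \;=\; \frac{\langle k_{\mathrm{sfs}}\rangle}{\langle k \rangle}
```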

  4. Design and development of a large diameter high pressure fast acting propulsion valve and valve actuator

    NASA Technical Reports Server (NTRS)

    Srinivasan, K. V.

    1986-01-01

    The design and development of a large diameter high pressure quick acting propulsion valve and valve actuator is described. The valve is the heart of a major test facility dedicated to conducting full scale performance tests of aircraft landing systems. The valve opens in less than 300 milliseconds releasing a 46-centimeter- (18-in.-) diameter water jet and closes in 300 milliseconds. The four main components of the valve, i.e., valve body, safety shutter, high speed shutter, and pneumatic-hydraulic actuator, are discussed. This valve is unique and may have other aerospace and industrial applications.

  5. Design and Development of a Large Diameter, High Pressure, Fast Acting Propulsion Valve and Valve Actuator

    NASA Technical Reports Server (NTRS)

    Srinivasan, K. V.

    1986-01-01

    This paper describes the design and development of a large diameter high pressure quick acting propulsion valve and valve actuator. The valve is the heart of a major test facility dedicated to conducting full scale performance tests of aircraft landing gear systems. The valve opens in less than 300 milliseconds releasing a 46 cm (18 in) diameter water jet and closes in 300 milliseconds. The four main components of the valve, i.e., valve body, safety shutter, high speed shutter, and pneumatic-hydraulic actuator, are discussed. This valve is unique and may have other aerospace and industrial applications.

  6. Climatic and Catchment-Scale Predictors of Chinese Stream Insect Richness Differ between Taxonomic Groups

    PubMed Central

    Tonkin, Jonathan D.; Shah, Deep Narayan; Kuemmerlen, Mathias; Li, Fengqing; Cai, Qinghua; Haase, Peter; Jähnig, Sonja C.

    2015-01-01

    Little work has been done on large-scale patterns of stream insect richness in China. We explored the influence of climatic and catchment-scale factors on stream insect (Ephemeroptera, Plecoptera, Trichoptera; EPT) richness across mid-latitude China. We assessed the predictive ability of climatic, catchment land cover and physical structure variables on genus richness of EPT, both individually and combined, in 80 mid-latitude Chinese streams, spanning a 3899-m altitudinal gradient. We performed analyses using boosted regression trees and explored the nature of their influence on richness patterns. The relative importance of climate, land cover, and physical factors on stream insect richness varied considerably between the three orders, and while important for Ephemeroptera and Plecoptera, latitude did not improve model fit for any of the groups. EPT richness was linked with areas comprising high forest cover, elevation and slope, large catchments and low temperatures. Ephemeroptera favoured areas with high forest cover, medium-to-large catchment sizes, high temperature seasonality, and low potential evapotranspiration. Plecoptera richness was linked with low temperature seasonality and annual mean, and high slope, elevation and warm-season rainfall. Finally, Trichoptera favoured high elevation areas, with high forest cover, and low mean annual temperature, seasonality and aridity. Our findings highlight the variable role that catchment land cover, physical properties and climatic influences have on stream insect richness. This is one of the first studies of its kind in Chinese streams, thus we set the scene for more in-depth assessments of stream insect richness across broader spatial scales in China, but stress the importance of improving data availability and consistency through time. PMID:25909190

  7. Past and present cosmic structure in the SDSS DR7 main sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr

    2015-01-01

    We present a chrono-cosmography project, aiming at the inference of the four dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly-detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.

  8. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  9. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming data deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analyses tools, efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analyses workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analyses tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and the support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Nanocomposite of polyaniline nanorods grown on graphene nanoribbons for highly capacitive pseudocapacitors.

    PubMed

    Li, Lei; Raji, Abdul-Rahman O; Fei, Huilong; Yang, Yang; Samuel, Errol L G; Tour, James M

    2013-07-24

    A facile and cost-effective approach to the fabrication of a nanocomposite material of polyaniline (PANI) and graphene nanoribbons (GNRs) has been developed. The morphology of the composite was characterized by scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy, and X-ray diffraction analysis. The resulting composite has a high specific capacitance of 340 F/g and stable cycling performance with 90% capacitance retention over 4200 cycles. The high performance of the composite results from the synergistic combination of electrically conductive GNRs and highly capacitive PANI. The method developed here is practical for large-scale development of pseudocapacitor electrodes for energy storage.
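
    Specific capacitance figures such as the 340 F/g quoted above are commonly extracted from galvanostatic charge-discharge curves as C = I Δt / (m ΔV). The helper below encodes that relation; the numbers in the usage comment are hypothetical, chosen only so the arithmetic reproduces a 340 F/g result, and are not taken from the paper.

      def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
          """Gravimetric capacitance from a galvanostatic discharge: C = I * dt / (m * dV)."""
          return current_a * discharge_time_s / (mass_g * voltage_window_v)

      # Hypothetical numbers for illustration only:
      # 1 mA discharge over 272 s, 1 mg of active material, 0.8 V window -> 340 F/g
      print(specific_capacitance(1e-3, 272.0, 1e-3, 0.8))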

  11. Design of coated standing nanowire array solar cell performing beyond the planar efficiency limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Yang; Ye, Qinghao; Shen, Wenzhong, E-mail: wzshen@sjtu.edu.cn

    2016-05-28

    The single standing nanowire (SNW) solar cells have been proven to perform beyond the planar efficiency limits in both open-circuit voltage and internal quantum efficiency due to the built-in concentration and the shifting of the absorption front. However, the expandability of these nano-scale units to a macro-scale photovoltaic device remains unsolved. The main difficulty lies in the simultaneous preservation of an effective built-in concentration in each unit cell and a broadband high absorption capability of their array. Here, we have provided a detailed theoretical guideline for realizing a macro-scale solar cell that performs furthest beyond the planar limits. The key lies in a complementary design between the light-trapping of the single SNWs and that of the photonic crystal slab formed by the array. By tuning the hybrid HE modes of the SNWs through the thickness of a coaxial dielectric coating, the optimized coated SNW array can sustain an absorption rate over 97.5% for a period as large as 425 nm, which, together with the inherited carrier extraction advantage, leads to a cell efficiency increment of 30% over the planar limit. This work has demonstrated the viability of a large-size solar cell that performs beyond the planar limits.

  12. Large eddy simulations of compressible magnetohydrodynamic turbulence

    NASA Astrophysics Data System (ADS)

    Grete, Philipp

    2017-02-01

    Supersonic, magnetohydrodynamic (MHD) turbulence is thought to play an important role in many processes - especially in astrophysics, where detailed three-dimensional observations are scarce. Simulations can partially fill this gap and help to understand these processes. However, direct simulations with realistic parameters are often not feasible. Consequently, large eddy simulations (LES) have emerged as a viable alternative. In LES the overall complexity is reduced by simulating only large and intermediate scales directly. The smallest scales, usually referred to as subgrid-scales (SGS), are introduced to the simulation by means of an SGS model. Thus, the overall quality of an LES with respect to properly accounting for small-scale physics crucially depends on the quality of the SGS model. While there has been a lot of successful research on SGS models in the hydrodynamic regime for decades, SGS modeling in MHD is a rather recent topic, in particular, in the compressible regime. In this thesis, we derive and validate a new nonlinear MHD SGS model that explicitly takes compressibility effects into account. A filter is used to separate the large and intermediate scales, and it is thought to mimic finite resolution effects. In the derivation, we use a deconvolution approach on the filter kernel. With this approach, we are able to derive nonlinear closures for all SGS terms in MHD: the turbulent Reynolds and Maxwell stresses, and the turbulent electromotive force (EMF). We validate the new closures both a priori and a posteriori. In the a priori tests, we use high-resolution reference data of stationary, homogeneous, isotropic MHD turbulence to compare exact SGS quantities against predictions by the closures. The comparison includes, for example, correlations of turbulent fluxes, the average dissipative behavior, and alignment of SGS vectors such as the EMF. In order to quantify the performance of the new nonlinear closure, this comparison is conducted from the subsonic (sonic Mach number M s ≈ 0.2) to the highly supersonic (M s ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial M s ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
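
    A priori testing of the kind described above compares an exact subgrid-scale (SGS) term, computed by explicitly filtering high-resolution data, against the prediction of a closure evaluated on the filtered fields. The sketch below illustrates the procedure for a single hydrodynamic Reynolds-stress component with a box filter and a scale-similarity closure, using a random field as a stand-in for turbulence data; it is not the compressible MHD closure derived in the thesis.

      import numpy as np
      from scipy.ndimage import uniform_filter

      rng = np.random.default_rng(1)
      nx = ny = 128
      u = rng.standard_normal((nx, ny))   # stand-ins for high-resolution velocity components
      v = rng.standard_normal((nx, ny))

      def box_filter(f, width=8):
          # periodic box filter mimicking finite resolution
          return uniform_filter(f, size=width, mode="wrap")

      # Exact SGS (Reynolds-type) stress component: tau_xy = bar(u v) - bar(u) bar(v)
      tau_xy_exact = box_filter(u * v) - box_filter(u) * box_filter(v)

      # Scale-similarity closure: repeat the filtering on the already-filtered fields
      ub, vb = box_filter(u), box_filter(v)
      tau_xy_model = box_filter(ub * vb) - box_filter(ub) * box_filter(vb)

      corr = np.corrcoef(tau_xy_exact.ravel(), tau_xy_model.ravel())[0, 1]
      print("a priori correlation between exact and modelled stress:", corr)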

  13. Imbalance aware lithography hotspot detection: a deep learning approach

    NASA Astrophysics Data System (ADS)

    Yang, Haoyu; Luo, Luyang; Su, Jing; Lin, Chenxi; Yu, Bei

    2017-07-01

    With the advancement of very large scale integrated circuits (VLSI) technology nodes, lithographic hotspots become a serious problem that affects manufacture yield. Lithography hotspot detection at the post-OPC stage is imperative to check potential circuit failures when transferring designed patterns onto silicon wafers. Although conventional lithography hotspot detection methods, such as machine learning, have gained satisfactory performance, with the extreme scaling of transistor feature size and layout patterns growing in complexity, conventional methodologies may suffer from performance degradation. For example, manual or ad hoc feature extraction in a machine learning framework may lose important information when predicting potential errors in ultra-large-scale integrated circuit masks. We present a deep convolutional neural network (CNN) that targets representative feature learning in lithography hotspot detection. We carefully analyze the impact and effectiveness of different CNN hyperparameters, through which a hotspot-detection-oriented neural network model is established. Because hotspot patterns are always in the minority in VLSI mask design, the training dataset is highly imbalanced. In this situation, a neural network is no longer reliable, because a trained model with high classification accuracy may still suffer from a high number of false negative results (missing hotspots), which is fatal in hotspot detection problems. To address the imbalance problem, we further apply hotspot upsampling and random-mirror flipping before training the network. Experimental results show that our proposed neural network model achieves comparable or better performance on the ICCAD 2012 contest benchmark compared to state-of-the-art hotspot detectors based on deep or representative machine learning.
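
    The imbalance handling described above (upsampling the minority hotspot class and mirror-flipping layout clips) can be sketched as a simple preprocessing step. The function below is an illustrative assumption about how such balancing might be coded, not the authors' exact recipe; array shapes and flip probabilities are invented.

      import numpy as np

      def balance_with_mirror_flips(images, labels, rng=np.random.default_rng(0)):
          """Upsample the minority (hotspot) class and add mirror-flipped copies.

          images: (N, H, W) layout clips; labels: (N,) with 1 = hotspot, 0 = non-hotspot.
          Assumes hotspots are the minority class, as stated in the abstract.
          """
          pos = np.flatnonzero(labels == 1)
          neg = np.flatnonzero(labels == 0)
          # sample hotspot clips with replacement until the two classes are balanced
          extra = rng.choice(pos, size=len(neg) - len(pos), replace=True)
          aug = images[extra]                       # fancy indexing copies the clips
          # random horizontal/vertical mirroring of the duplicated clips
          flip_h = rng.random(len(aug)) < 0.5
          flip_v = rng.random(len(aug)) < 0.5
          aug[flip_h] = aug[flip_h, :, ::-1]
          aug[flip_v] = aug[flip_v, ::-1, :]
          images_out = np.concatenate([images, aug])
          labels_out = np.concatenate([labels, np.ones(len(aug), dtype=labels.dtype)])
          return images_out, labels_out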

  14. Harnessing Diversity towards the Reconstructing of Large Scale Gene Regulatory Networks

    PubMed Central

    Yamanaka, Ryota; Kitano, Hiroaki

    2013-01-01

    Elucidating gene regulatory network (GRN) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet integrating only high-performance algorithms provide significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstruct an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if expression-data associated with known regulatory network is similar to that with unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for known dataset perform well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks. PMID:24278007
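
    TopkNet itself is not reproduced here, but the general consensus idea, combining edge rankings from several inference algorithms, can be illustrated with a simple rank-average scheme. Function and variable names below are hypothetical.

      import numpy as np
      from scipy.stats import rankdata

      def consensus_edge_ranking(score_matrices):
          """Combine edge-score matrices from several GRN inference algorithms by average rank.

          Each matrix has shape (genes, genes); higher score = stronger predicted regulation.
          This is a generic rank-average consensus, not the TopkNet procedure itself.
          """
          ranks = [rankdata(-m, method="average").reshape(m.shape) for m in score_matrices]
          return np.mean(ranks, axis=0)   # lower average rank = higher consensus confidence

      # Toy usage with three hypothetical algorithms on a 5-gene network
      rng = np.random.default_rng(2)
      mats = [rng.random((5, 5)) for _ in range(3)]
      consensus = consensus_edge_ranking(mats)
      print(np.unravel_index(np.argmin(consensus), consensus.shape))  # top consensus edge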

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ji-Hui; Yuan, Qinghong; Deng, Huixiong

    Current thermoelectric (TE) materials often have low performance or contain less abundant and/or toxic elements, thus limiting their large-scale applications. Therefore, new TE materials with high efficiency and low cost are strongly desirable. Here we demonstrate that SiS and SiSe monolayers made from nontoxic and earth-abundant elements intrinsically have low thermal conductivities arising from their low-frequency optical phonon branches with large overlaps with acoustic phonon modes, which is similar to the state-of-the-art experimentally demonstrated material SnSe with a layered structure. Together with high thermal power factors due to their two-dimensional nature, they show promising TE performances with large figure of merit (ZT) values exceeding 1 or 2 over a wide range of temperatures. We establish some basic understanding of identifying layered materials with low thermal conductivities, which can guide and stimulate the search and study of other layered materials for TE applications.
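
    The figure of merit quoted above is defined as ZT = S²σT/κ, with Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ. The helper below simply evaluates this definition; the numbers in the example are illustrative and are not the values computed for SiS or SiSe monolayers.

      def figure_of_merit(seebeck_v_per_k, conductivity_s_per_m, kappa_w_per_mk, temperature_k):
          """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
          return seebeck_v_per_k**2 * conductivity_s_per_m * temperature_k / kappa_w_per_mk

      # Illustrative numbers only (not reported values):
      # S = 250 uV/K, sigma = 5e4 S/m, kappa = 1.0 W/(m K), T = 700 K  ->  ZT ~ 2.2
      print(figure_of_merit(250e-6, 5e4, 1.0, 700.0))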

  16. High-Performance Carbon Dioxide Electrocatalytic Reduction by Easily Fabricated Large-Scale Silver Nanowire Arrays.

    PubMed

    Luan, Chuhao; Shao, Yang; Lu, Qi; Gao, Shenghan; Huang, Kai; Wu, Hui; Yao, Kefu

    2018-05-30

    An efficient and selective catalyst is urgently needed for carbon dioxide electroreduction, and silver is one of the promising candidates with affordable cost. Here we fabricated large-scale vertically standing Ag nanowire arrays with high crystallinity and electrical conductivity as carbon dioxide electroreduction catalysts by a simple nanomolding method that was usually considered not feasible for metallic crystalline materials. A great enhancement of current densities and selectivity for CO at moderate potentials was achieved. The current density for CO (j_CO) of the Ag nanowire array 200 nm in diameter was more than 2500 times larger than that of Ag foil at an overpotential of 0.49 V, with an efficiency over 90%. The enhanced performance is attributed to a greatly increased electrochemically active surface area (ECSA) and higher intrinsic activity compared to those of polycrystalline Ag foil. More low-coordinated sites on the nanowires, which can stabilize the CO2 intermediate better, are responsible for the high intrinsic activity. In addition, the impact of surface morphology, which induces limited mass transport, on the reaction selectivity and efficiency of nanowire arrays with different diameters was also discussed.

  17. Recent research progress on iron- and manganese-based positive electrode materials for rechargeable sodium batteries

    PubMed Central

    Yabuuchi, Naoaki; Komaba, Shinichi

    2014-01-01

    Large-scale high-energy batteries with electrode materials made from the Earth-abundant elements are needed to achieve sustainable energy development. On the basis of material abundance, rechargeable sodium batteries with iron- and manganese-based positive electrode materials are the ideal candidates for large-scale batteries. In this review, iron- and manganese-based electrode materials, oxides, phosphates, fluorides, etc, as positive electrodes for rechargeable sodium batteries are reviewed. Iron and manganese compounds with sodium ions provide high structural flexibility. Two layered polymorphs, O3- and P2-type layered structures, show different electrode performance in Na cells related to the different phase transition and sodium migration processes on sodium extraction/insertion. Similar to layered oxides, iron/manganese phosphates and pyrophosphates also provide the different framework structures, which are used as sodium insertion host materials. Electrode performance and reaction mechanisms of the iron- and manganese-based electrode materials in Na cells are described and the similarities and differences with lithium counterparts are also discussed. Together with these results, the possibility of the high-energy battery system with electrode materials made from only Earth-abundant elements is reviewed. PMID:27877694

  18. Recent research progress on iron- and manganese-based positive electrode materials for rechargeable sodium batteries.

    PubMed

    Yabuuchi, Naoaki; Komaba, Shinichi

    2014-08-01

    Large-scale high-energy batteries with electrode materials made from the Earth-abundant elements are needed to achieve sustainable energy development. On the basis of material abundance, rechargeable sodium batteries with iron- and manganese-based positive electrode materials are the ideal candidates for large-scale batteries. In this review, iron- and manganese-based electrode materials, oxides, phosphates, fluorides, etc, as positive electrodes for rechargeable sodium batteries are reviewed. Iron and manganese compounds with sodium ions provide high structural flexibility. Two layered polymorphs, O3- and P2-type layered structures, show different electrode performance in Na cells related to the different phase transition and sodium migration processes on sodium extraction/insertion. Similar to layered oxides, iron/manganese phosphates and pyrophosphates also provide the different framework structures, which are used as sodium insertion host materials. Electrode performance and reaction mechanisms of the iron- and manganese-based electrode materials in Na cells are described and the similarities and differences with lithium counterparts are also discussed. Together with these results, the possibility of the high-energy battery system with electrode materials made from only Earth-abundant elements is reviewed.

  19. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value of occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In derivation of PASS 2/3 of the test cohort was used and showed accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. Optimal cut point ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography with PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed equal performance although more simple when compared with other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
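
    The scale construction lends itself to a compact sketch: PASS is the count of abnormal items among the three selected NIHSS components, and its screening performance at the ≥2 cut point is summarized by sensitivity and specificity. The code below is illustrative only; how each item is dichotomized and the tiny example cohort are assumptions, not the published definitions or data.

      import numpy as np

      def pass_score(loc_abnormal, gaze_abnormal, arm_weak):
          """PASS = number of abnormal items among LOC (month/age), gaze, and arm weakness (0-3)."""
          return int(loc_abnormal) + int(gaze_abnormal) + int(arm_weak)

      def sensitivity_specificity(scores, has_elvo, cut=2):
          scores = np.asarray(scores)
          has_elvo = np.asarray(has_elvo, dtype=bool)
          pred = scores >= cut
          sens = np.mean(pred[has_elvo])       # fraction of ELVO patients flagged
          spec = np.mean(~pred[~has_elvo])     # fraction of non-ELVO patients not flagged
          return sens, spec

      # Tiny made-up cohort for illustration (not the Danish test cohort)
      scores = [pass_score(1, 1, 1), pass_score(1, 0, 0), pass_score(0, 1, 1), pass_score(0, 0, 0)]
      elvo = [True, False, True, False]
      print(sensitivity_specificity(scores, elvo))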

  20. Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances

    NASA Astrophysics Data System (ADS)

    Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui

    2018-04-01

    High peaks in weak lensing (WL) maps originate dominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function and thus a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with a large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not taken into account properly. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.

  1. Novel Miscanthus Germplasm-Based Value Chains: A Life Cycle Assessment

    PubMed Central

    Wagner, Moritz; Kiesel, Andreas; Hastings, Astley; Iqbal, Yasir; Lewandowski, Iris

    2017-01-01

    In recent years, considerable progress has been made in miscanthus research: improvement of management practices, breeding of new genotypes, especially for marginal conditions, and development of novel utilization options. The purpose of the current study was a holistic analysis of the environmental performance of such novel miscanthus-based value chains. In addition, the relevance of the analyzed environmental impact categories was assessed. A Life Cycle Assessment was conducted to analyse the environmental performance of the miscanthus-based value chains in 18 impact categories. In order to include the substitution of a reference product, a system expansion approach was used. In addition, a normalization step was applied. This allowed the relevance of these impact categories to be evaluated for each utilization pathway. The miscanthus was cultivated on six sites in Europe (Aberystwyth, Adana, Moscow, Potash, Stuttgart and Wageningen) and the biomass was utilized in the following six pathways: (1) small-scale combustion (heat)—chips; (2) small-scale combustion (heat)—pellets; (3) large-scale combustion (CHP)—biomass baled for transport and storage; (4) large-scale combustion (CHP)—pellets; (5) medium-scale biogas plant—ensiled miscanthus biomass; and (6) large-scale production of insulation material. Thus, in total, the environmental performance of 36 site × pathway combinations was assessed. The comparatively high normalized results of human toxicity, marine, and freshwater ecotoxicity, and freshwater eutrophication indicate the relevance of these impact categories in the assessment of miscanthus-based value chains. Differences between the six sites can almost entirely be attributed to variations in biomass yield. However, the environmental performance of the utilization pathways analyzed varied widely. The largest differences were shown for freshwater and marine ecotoxicity, and freshwater eutrophication. The production of insulation material had the lowest impact on the environment, with net benefits in all impact categories expect three (marine eutrophication, human toxicity, agricultural land occupation). This performance can be explained by the multiple use of the biomass, first as material and subsequently as an energy carrier, and by the substitution of an emission-intensive reference product. The results of this study emphasize the importance of assessing all environmental impacts when selecting appropriate utilization pathways. PMID:28642784

  2. Parallel Domain Decomposition Formulation and Software for Large-Scale Sparse Symmetrical/Unsymmetrical Aeroacoustic Applications

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Watson, Willie R. (Technical Monitor)

    2005-01-01

    The overall objectives of this research work are to formulate and validate efficient parallel algorithms, and to efficiently design/implement computer software for solving large-scale acoustic problems, arised from the unified frameworks of the finite element procedures. The adopted parallel Finite Element (FE) Domain Decomposition (DD) procedures should fully take advantages of multiple processing capabilities offered by most modern high performance computing platforms for efficient parallel computation. To achieve this objective. the formulation needs to integrate efficient sparse (and dense) assembly techniques, hybrid (or mixed) direct and iterative equation solvers, proper pre-conditioned strategies, unrolling strategies, and effective processors' communicating schemes. Finally, the numerical performance of the developed parallel finite element procedures will be evaluated by solving series of structural, and acoustic (symmetrical and un-symmetrical) problems (in different computing platforms). Comparisons with existing "commercialized" and/or "public domain" software are also included, whenever possible.

  3. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speed to today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low speed and high speed wind tunnels and on a research aircraft. The major objective of this testing is to establish the structural integrity of large scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables and a review of the key analytical results are given.

  4. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  5. Large Scale Cross Drive Correlation Of Digital Media

    DTIC Science & Technology

    2016-03-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Large Scale Cross-Drive Correlation of Digital Media, by Joseph Van Bruaene, March 2016. The ability to make large scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a

  6. Automated microscopy for high-content RNAi screening

    PubMed Central

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  7. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
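
    One of the well-known fusion approaches examined in such studies is min-max score normalization followed by a simple sum rule. The sketch below shows that baseline; it does not reproduce the new normalization and fusion methods introduced in the paper, and all scores and the decision threshold are hypothetical.

      import numpy as np

      def min_max_normalize(scores):
          """Map raw matcher scores to [0, 1] using the observed score range."""
          s = np.asarray(scores, dtype=float)
          return (s - s.min()) / (s.max() - s.min())

      def sum_fusion(fingerprint_scores, face_scores):
          """Fuse two modalities by summing normalized scores (higher = more likely genuine)."""
          return min_max_normalize(fingerprint_scores) + min_max_normalize(face_scores)

      # Hypothetical scores for four verification attempts
      finger = [0.91, 0.35, 0.60, 0.10]
      face = [78.0, 55.0, 80.0, 20.0]
      fused = sum_fusion(finger, face)
      accept = fused >= 1.2    # illustrative decision threshold
      print(fused, accept)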

  8. Just enough inflation: power spectrum modifications at large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2014-12-01

    We show that models of 'just enough' inflation, where the slow-roll evolution lasted only 50-60 e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic analytic analysis in the limit of a sudden transition between any possible non-slow-roll background evolution and the final stage of slow-roll inflation. We find a high degree of universality since most common backgrounds like fast-roll evolution, matter or radiation-dominance give rise to a power loss at large angular scales and a peak together with an oscillatory behaviour at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation of state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low ℓ, and so seem disfavoured by recent observational hints for a lack of CMB power at ℓ ≲ 40. We also comment on the importance of initial conditions and the possibility to have multiple pre-inflationary stages.

  9. High-Performance Cryogenic Designs for OMEGA and the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Goncharov, V. N.; Collins, T. J. B.; Marozas, J. A.; Regan, S. P.; Betti, R.; Boehly, T. R.; Campbell, E. M.; Froula, D. H.; Igumenshchev, I. V.; McCrory, R. L.; Myatt, J. F.; Radha, P. B.; Sangster, T. C.; Shvydky, A.

    2016-10-01

    The main advantage of laser symmetric direct drive (SDD) is a significantly higher coupled drive laser energy to the hot-spot internal energy at stagnation compared to that of laser indirect drive. Because of coupling losses resulting from cross-beam energy transfer (CBET), however, reaching ignition conditions on the NIF with SDD requires designs with excessively large in-flight aspect ratios ( 30). Results of cryogenic implosions performed on OMEGA show that such designs are unstable to short-scale nonuniformity growth during shell implosion. Several CBET reduction strategies have been proposed in the past. This talk will discuss high-performing designs using several CBET-mitigation techniques, including using drive laser beams smaller than the target size and wavelength detuning. Designs that are predicted to reach alpha burning regimes as well as a gain of 10 to 40 at the NIF-scale will be presented. Hydrodynamically scaled OMEGA designs with similar CBET-reduction techniques will also be discussed. This material is based upon work supported by the Department Of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  10. Multiplexed, High Density Electrophysiology with Nanofabricated Neural Probes

    PubMed Central

    Du, Jiangang; Blanche, Timothy J.; Harrison, Reid R.; Lester, Henry A.; Masmanidis, Sotiris C.

    2011-01-01

    Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable. PMID:22022568

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcellini, P.; Pless, S.; Lobato, C.

    Until recently, large-scale, cost-effective net-zero energy buildings (NZEBs) were thought to lie decades in the future. However, ongoing work at the National Renewable Energy Laboratory (NREL) indicates that NZEB status is both achievable and repeatable today. This paper presents a definition framework for classifying NZEBs and a real-life example that demonstrates how a large-scale office building can cost-effectively achieve net-zero energy. The vision of NZEBs is compelling. In theory, these highly energy-efficient buildings will produce, during a typical year, enough renewable energy to offset the energy they consume from the grid. The NREL NZEB definition framework classifies NZEBs according to the criteria being used to judge net-zero status and the way renewable energy is supplied to achieve that status. We use the new U.S. Department of Energy/NREL 220,000-ft² Research Support Facilities (RSF) building to illustrate why a clear picture of NZEB definitions is important and how the framework provides a methodology for creating a cost-effective NZEB. The RSF, scheduled to open in June 2010, includes contractual commitments to deliver a Leadership in Energy and Environmental Design (LEED) Platinum Rating, an energy use intensity of 25 kBtu/ft² (half that of a typical LEED Platinum office building), and net-zero energy status. We will discuss the analysis method and cost tradeoffs that were performed throughout the design and build phases to meet these commitments and maintain construction costs at $259/ft². We will discuss ways to achieve large-scale, replicable NZEB performance. Many passive and renewable energy strategies are utilized, including full daylighting, high-performance lighting, natural ventilation through operable windows, thermal mass, transpired solar collectors, radiant heating and cooling, and workstation configurations that allow for maximum daylighting.
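
    Using the two figures quoted above (an energy use intensity of 25 kBtu/ft² and a 220,000 ft² floor area), a back-of-the-envelope site net-zero check looks like the sketch below. The PV specific yield is an assumed round number, not an NREL figure, so the resulting capacity is illustrative only.

      KBTU_PER_KWH = 3.412

      def annual_site_use_kwh(eui_kbtu_per_ft2, floor_area_ft2):
          """Annual site energy use implied by an energy use intensity (EUI)."""
          return eui_kbtu_per_ft2 * floor_area_ft2 / KBTU_PER_KWH

      def pv_capacity_for_net_zero(annual_use_kwh, specific_yield_kwh_per_kw=1500.0):
          """PV capacity (kW) whose typical-year output offsets the annual site use.

          The specific yield is an assumed value for illustration; it varies with climate.
          """
          return annual_use_kwh / specific_yield_kwh_per_kw

      use = annual_site_use_kwh(25.0, 220_000)   # figures quoted in the abstract above
      print(round(use), "kWh/yr ->", round(pv_capacity_for_net_zero(use)), "kW of PV (illustrative)")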

  12. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA funded MAGIC gigabit testbed. A collection of wide area distributed disk servers operate in parallel to provide logical block level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  13. Improving efficiency of polystyrene concrete production with composite binders

    NASA Astrophysics Data System (ADS)

    Lesovik, R. V.; Ageeva, M. S.; Lesovik, G. A.; Sopin, D. M.; Kazlitina, O. V.; Mitrokhina, A. A.

    2018-03-01

    According to leading marketing researchers, the construction market in Russia and CIS will continue growing at a rapid rate; this applies not only to a large-scale major construction, but to a construction of single-family houses and small-scale industrial facilities as well. Due to this, there are increased requirements for heat insulation of the building enclosures and a significant demand for efficient walling materials with high thermal performance. All these developments led to higher requirements imposed on the equipment that produces such materials.

  14. Time to "go large" on biofilm research: advantages of an omics approach.

    PubMed

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.

  15. Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions

    DOE PAGES

    McLerran, Larry; Tribedy, Prithwish

    2015-11-02

    High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.

  16. Evaluating the Performance of the Goddard Multi-Scale Modeling Framework against GPM, TRMM and CloudSat/CALIPSO Products

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.

    2014-12-01

    Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently a new Goddard one-moment bulk microphysics with four-ice classes (cloud ice, snow, graupel, and frozen drops/hail) has been developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented to the MMF and two MMF experiments were carried out with this new scheme and the old three-ice classes (cloud ice, snow graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments with and without nudging large-scale forcings to those of ERA-Interim reanalysis were carried out to study the impacts of large-scale forcings. The model simulated mean and variability of surface precipitation, cloud types, cloud properties such as cloud amount, hydrometeors vertical profiles, and cloud water contents, etc. in different geographic locations and climate regimes are evaluated against GPM, TRMM, CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and microphysics.

  17. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
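
    As a greatly simplified illustration of the precision experiments described above, the sketch below integrates the single-scale Lorenz '96 system twice, once with tendencies evaluated in double precision and once with them evaluated in half precision as a crude stand-in for inexact hardware, and compares the resulting climate means. The scale-separated treatment and the bit-flip emulator used in the paper are not reproduced; the forward-Euler step and all parameters are choices made for brevity.

      import numpy as np

      def l96_tendency(x, forcing=8.0):
          """Single-scale Lorenz '96 tendency dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F."""
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def integrate(x0, steps=4000, dt=0.005, precision=np.float64):
          """Forward-Euler integration with the tendency evaluated at a chosen precision."""
          x = np.array(x0, dtype=np.float64)
          for _ in range(steps):
              dx = l96_tendency(x.astype(precision)).astype(np.float64)
              x = x + dt * dx
          return x

      x0 = 8.0 + 0.01 * np.random.default_rng(3).standard_normal(40)
      exact = integrate(x0, precision=np.float64)
      reduced = integrate(x0, precision=np.float16)   # crude emulation of inexact arithmetic
      print("difference in climate mean:", abs(exact.mean() - reduced.mean()))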

  18. Performance model-directed data sieving for high-performance I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach namely performance model-directed data sieving, or PMD data sieving in short. It improves the existing data sieving approach from two aspects: (1) dynamically determines when it is beneficial to perform data sieving; and (2) dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces the memory consumption as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing the memory pressure in a large-scale system, the proposed PMD data sieving approach in this research holds a great promise and will have an impact on high-performance I/O systems.
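
    The basic data-sieving idea, serving many small noncontiguous requests with one large contiguous read and then extracting the requested pieces, together with a crude benefit test, can be sketched as below. The cost model here is an invented two-parameter stand-in, not the performance model proposed in the paper.

      def sieve_read(f, requests):
          """Serve noncontiguous (offset, length) requests with one contiguous read.

          requests: list of (offset, length) tuples, assumed sorted by offset and in range.
          Returns the requested byte ranges, extracted from a single large read.
          """
          start = requests[0][0]
          end = max(off + length for off, length in requests)
          f.seek(start)
          buf = f.read(end - start)                  # one big "sieving" read, holes included
          return [buf[off - start: off - start + length] for off, length in requests]

      def sieving_beneficial(requests, per_request_cost, per_byte_cost):
          """Crude model: compare many small reads against one large read spanning the holes."""
          direct = len(requests) * per_request_cost + sum(l for _, l in requests) * per_byte_cost
          span = max(o + l for o, l in requests) - requests[0][0]
          sieved = per_request_cost + span * per_byte_cost
          return sieved < direct

    A real performance model would calibrate the per-request and per-byte costs for the target file system and also account for memory pressure, which is the kind of decision the PMD approach makes dynamically.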

  19. Large-size porous ZnO flakes with superior gas-sensing performance

    NASA Astrophysics Data System (ADS)

    Wen, Wei; Wu, Jin-Ming; Wang, Yu-De

    2012-06-01

    A simple top-down route is developed to fabricate large size porous ZnO flakes via solution combustion synthesis followed by a subsequent calcination in air, which is template-free and can be easily enlarged to an industrial scale. The achieved porous ZnO flakes, which are tens to hundreds of micrometers in flat and tens of nanometers in thickness, exhibit high response for detecting acetone and ethanol, because the unique two-dimensional architecture shortens effectively the gas diffusion distance and provides highly accessible open channels and active surfaces for the target gas.

  20. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyzephenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  1. Workshop Report on Additive Manufacturing for Large-Scale Metal Components - Development and Deployment of Metal Big-Area-Additive-Manufacturing (Large-Scale Metals AM) System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babu, Sudarsanam Suresh; Love, Lonnie J.; Peter, William H.

    Additive manufacturing (AM) is considered an emerging technology that is expected to transform the way industry can make low-volume, high value complex structures. This disruptive technology promises to replace legacy manufacturing methods for the fabrication of existing components in addition to bringing new innovation for new components with increased functional and mechanical properties. This report outlines the outcome of a workshop on large-scale metal additive manufacturing held at Oak Ridge National Laboratory (ORNL) on March 11, 2016. The charter for the workshop was outlined by the Department of Energy (DOE) Advanced Manufacturing Office program manager. The status and impact of the Big Area Additive Manufacturing (BAAM) for polymer matrix composites was presented as the background motivation for the workshop. Following, the extension of underlying technology to low-cost metals was proposed with the following goals: (i) High deposition rates (approaching 100 lbs/h); (ii) Low cost (<$10/lbs) for steel, iron, aluminum, nickel, as well as, higher cost titanium, (iii) large components (major axis greater than 6 ft) and (iv) compliance of property requirements. The above concept was discussed in depth by representatives from different industrial sectors including welding, metal fabrication machinery, energy, construction, aerospace and heavy manufacturing. In addition, DOE’s newly launched High Performance Computing for Manufacturing (HPC4MFG) program was reviewed. This program will apply thermo-mechanical models to elucidate deeper understanding of the interactions between design, process, and materials during additive manufacturing. Following these presentations, all the attendees took part in a brainstorming session where everyone identified the top 10 challenges in large-scale metal AM from their own perspective. The feedback was analyzed and grouped in different categories including, (i) CAD to PART software, (ii) selection of energy source, (iii) systems development, (iv) material feedstock, (v) process planning, (vi) residual stress & distortion, (vii) post-processing, (viii) qualification of parts, (ix) supply chain and (x) business case. Furthermore, an open innovation network methodology was proposed to accelerate the development and deployment of new large-scale metal additive manufacturing technology with the goal of creating a new generation of high deposition rate equipment, affordable feed stocks, and large metallic components to enhance America’s economic competitiveness.

  2. Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.

    PubMed

    Baars, W J; Hutchins, N; Marusic, I

    2017-03-13

    Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re τ ≈2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
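
    The amplitude-modulation diagnostic referred to above is commonly computed by splitting a velocity signal into large- and small-scale parts, taking the envelope of the small-scale residual, and correlating its large-scale variation with the large-scale signal. The sketch below runs that recipe on a synthetic modulated signal rather than hot-wire data; the cutoff frequency and signal parameters are arbitrary illustrations.

      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      rng = np.random.default_rng(4)
      fs, n = 1000.0, 20000
      t = np.arange(n) / fs

      # Synthetic signal: a large-scale wave that modulates broadband small-scale noise
      large = np.sin(2 * np.pi * 0.5 * t)
      small = (1.0 + 0.5 * large) * rng.standard_normal(n)
      u = large + 0.3 * small

      b, a = butter(4, 2.0 / (fs / 2))   # low-pass cutoff at 2 Hz marks the scale separation
      u_L = filtfilt(b, a, u)            # large-scale component
      u_S = u - u_L                      # small-scale residual

      envelope = np.abs(hilbert(u_S))    # instantaneous amplitude of the small scales
      env_L = filtfilt(b, a, envelope)   # keep only its large-scale variation

      R_AM = np.corrcoef(u_L, env_L)[0, 1]   # single-point amplitude-modulation coefficient
      print("amplitude modulation coefficient:", R_AM)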

  3. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
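
    The prototype idea can be illustrated with the low-rank kernel approximation it builds on: a small set of prototype points is used to approximate the full kernel matrix in Nyström fashion. The sketch below shows only that building block, not the PVM algorithm itself; data, kernel width and prototype count are arbitrary.

      import numpy as np

      def rbf_kernel(A, B, gamma=0.5):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      rng = np.random.default_rng(5)
      X = rng.standard_normal((500, 3))                              # "large" data set
      prototypes = X[rng.choice(len(X), size=30, replace=False)]     # small prototype set

      K_np = rbf_kernel(X, prototypes)                   # n x p cross-kernel
      K_pp = rbf_kernel(prototypes, prototypes)          # p x p prototype kernel
      K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T    # Nystrom-style low-rank approximation

      K_full = rbf_kernel(X, X)
      rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
      print("relative approximation error:", rel_err)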

  4. Overview of physical dosimetry methods for triage application integrated in the new European network RENEB.

    PubMed

    Trompier, François; Burbidge, Christopher; Bassinet, Céline; Baumann, Marion; Bortolin, Emanuela; De Angelis, Cinzia; Eakins, Jonathan; Della Monaca, Sara; Fattibene, Paola; Quattrini, Maria Cristina; Tanner, Rick; Wieser, Albrecht; Woda, Clemens

    2017-01-01

    In the EC-funded project RENEB (Realizing the European Network in Biodosimetry), physical methods applied to fortuitous dosimetric materials are used to complement biological dosimetry, to increase dose assessment capacity for large-scale radiation/nuclear accidents. This paper describes the work performed to implement Optically Stimulated Luminescence (OSL) and Electron Paramagnetic Resonance (EPR) dosimetry techniques. OSL is applied to electronic components and EPR to touch-screen glass from mobile phones. To implement these new approaches, several blind tests and inter-laboratory comparisons (ILC) were organized for each assay. OSL systems have shown good performances. EPR systems also show good performance in controlled conditions, but ILC have also demonstrated that post-irradiation exposure to sunlight increases the complexity of the EPR signal analysis. Physically-based dosimetry techniques present high capacity, new possibilities for accident dosimetry, especially in the case of large-scale events. Some of the techniques applied can be considered as operational (e.g. OSL on Surface Mounting Devices [SMD]) and provide a large increase of measurement capacity for existing networks. Other techniques and devices currently undergoing validation or development in Europe could lead to considerable increases in the capacity of the RENEB accident dosimetry network.

  5. A cooperative strategy for parameter estimation in large scale systems biology models.

    PubMed

    Villaverde, Alejandro F; Egea, Jose A; Banga, Julio R

    2012-06-22

    Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows to make experimentally verifiable predictions. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs ("threads") that run in parallel in different processors. Each thread implements a state of the art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and allows to speed up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems.
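
    The cooperative structure described above, independent search threads that periodically share their best solutions, can be caricatured with a toy optimizer. The sketch below uses sequential "islands" and a simple stochastic local search on a stand-in cost function; it does not implement the enhanced Scatter Search algorithm or the CeSS sharing rules.

      import numpy as np

      def rosenbrock(p):
          """Stand-in cost function for a model-calibration problem."""
          return np.sum(100.0 * (p[1:] - p[:-1] ** 2) ** 2 + (1.0 - p[:-1]) ** 2)

      def cooperative_search(n_threads=4, dim=10, rounds=20, iters_per_round=200, seed=6):
          rng = np.random.default_rng(seed)
          solutions = [rng.uniform(-2, 2, dim) for _ in range(n_threads)]
          for _ in range(rounds):
              # each "thread" performs its own simple stochastic local search
              for i in range(n_threads):
                  best = solutions[i]
                  for _ in range(iters_per_round):
                      cand = best + rng.normal(0.0, 0.1, dim)
                      if rosenbrock(cand) < rosenbrock(best):
                          best = cand
                  solutions[i] = best
              # cooperation step: broadcast the best solution found so far to all threads
              leader = min(solutions, key=rosenbrock)
              solutions = [leader + rng.normal(0.0, 0.01, dim) for _ in range(n_threads)]
              solutions[0] = leader          # keep one unperturbed copy of the leader
          return min(solutions, key=rosenbrock)

      best = cooperative_search()
      print("best cost found:", rosenbrock(best))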

  6. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large-scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general-purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended to incorporate other global and local search solvers and specific structural information for particular classes of problems. PMID:22727112

  7. Evolutionary conservatism and convergence both lead to striking similarity in ecology, morphology and performance across continents in frogs.

    PubMed

    Moen, Daniel S; Irschick, Duncan J; Wiens, John J

    2013-12-22

    Many clades contain ecologically and phenotypically similar species across continents, yet the processes generating this similarity are largely unstudied, leaving fundamental questions unanswered. Is similarity in morphology and performance across assemblages caused by evolutionary convergence or by biogeographic dispersal of evolutionarily conserved ecotypes? Does convergence to new ecological conditions erase evidence of past adaptation? Here, we analyse ecology, morphology and performance in frog assemblages from three continents (Asia, Australia and South America), assessing the importance of dispersal and convergent evolution in explaining similarity across regions. We find three striking results. First, species using the same microhabitat type are highly similar in morphology and performance across both clades and continents. Second, some species on different continents owe their similarity to dispersal and evolutionary conservatism (rather than evolutionary convergence), even over vast temporal and spatial scales. Third, in one case, an ecologically specialized ancestor radiated into diverse ecotypes that have converged with those on other continents, largely erasing traces of past adaptation to their ancestral ecology. Overall, our study highlights the roles of both evolutionary conservatism and convergence in explaining similarity in species traits over large spatial and temporal scales and demonstrates a statistical framework for addressing these questions in other systems.

  8. Evolutionary conservatism and convergence both lead to striking similarity in ecology, morphology and performance across continents in frogs

    PubMed Central

    Moen, Daniel S.; Irschick, Duncan J.; Wiens, John J.

    2013-01-01

    Many clades contain ecologically and phenotypically similar species across continents, yet the processes generating this similarity are largely unstudied, leaving fundamental questions unanswered. Is similarity in morphology and performance across assemblages caused by evolutionary convergence or by biogeographic dispersal of evolutionarily conserved ecotypes? Does convergence to new ecological conditions erase evidence of past adaptation? Here, we analyse ecology, morphology and performance in frog assemblages from three continents (Asia, Australia and South America), assessing the importance of dispersal and convergent evolution in explaining similarity across regions. We find three striking results. First, species using the same microhabitat type are highly similar in morphology and performance across both clades and continents. Second, some species on different continents owe their similarity to dispersal and evolutionary conservatism (rather than evolutionary convergence), even over vast temporal and spatial scales. Third, in one case, an ecologically specialized ancestor radiated into diverse ecotypes that have converged with those on other continents, largely erasing traces of past adaptation to their ancestral ecology. Overall, our study highlights the roles of both evolutionary conservatism and convergence in explaining similarity in species traits over large spatial and temporal scales and demonstrates a statistical framework for addressing these questions in other systems. PMID:24174109

  9. Microfluidic biolector-microfluidic bioprocess control in microtiter plates.

    PubMed

    Funke, Matthias; Buchenauer, Andreas; Schnakenberg, Uwe; Mokwa, Wilfried; Diederichs, Sylvia; Mertens, Alan; Müller, Carsten; Kensy, Frank; Büchs, Jochen

    2010-10-15

    In industrial-scale biotechnological processes, the active control of the pH value combined with the controlled feeding of substrate solutions (fed-batch) is the standard strategy for cultivating both prokaryotic and eukaryotic cells. By contrast, for small-scale cultivations, much simpler batch experiments with no process control are performed. This lack of process control often hinders researchers from scaling fermentation experiments up or down, because the microbial metabolism, and thereby the growth and production kinetics, changes drastically depending on the cultivation strategy applied. While small-scale batches are typically performed highly parallel and in high throughput, large-scale cultivations demand sophisticated equipment for process control, which is in most cases costly and difficult to handle. Currently, there is no technical system on the market that realizes simple process control in high throughput. The novel concept of a microfermentation system described in this work combines a fiber-optic online-monitoring device for microtiter plates (MTPs)--the BioLector technology--together with microfluidic control of cultivation processes in volumes below 1 mL. In the microfluidic chip, a micropump is integrated to realize distinct substrate flow rates during fed-batch cultivation at microscale. Hence, a cultivation system with several distinct advantages could be established: (1) high information output on a microscale; (2) many experiments can be performed in parallel and be automated using MTPs; (3) the system is user-friendly and can easily be transferred to a disposable single-use system. This article elucidates this new concept and illustrates applications in fermentations of Escherichia coli under pH-controlled and fed-batch conditions in shaken MTPs. Copyright 2010 Wiley Periodicals, Inc.

  10. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
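
    As a rough illustration of what exploitable memory content similarity looks like, the sketch below (not the authors' runtime) splits a memory snapshot into fixed-size pages, hashes each page, and reports the fraction of pages whose content duplicates that of another page; a resilience runtime could, in principle, reconstruct a corrupted page from such a twin. The page size and the synthetic snapshot are arbitrary choices for the example.

```python
# Duplicate-page census over a memory snapshot: a crude proxy for the content
# similarity that a resilience runtime could exploit (illustrative only).
import hashlib
import os
from collections import Counter

PAGE_SIZE = 4096  # bytes; a typical small page size, assumed for the example

def duplicate_page_fraction(snapshot: bytes) -> float:
    pages = [snapshot[i:i + PAGE_SIZE]
             for i in range(0, len(snapshot), PAGE_SIZE)]
    counts = Counter(hashlib.sha256(p).digest() for p in pages)
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / max(len(pages), 1)

if __name__ == "__main__":
    # Synthetic "snapshot": zero-filled pages (highly similar) plus random data.
    snap = b"\x00" * (64 * PAGE_SIZE) + os.urandom(64 * PAGE_SIZE)
    print(f"fraction of pages with an identical twin: "
          f"{duplicate_page_fraction(snap):.2f}")
```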

  11. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.

    DOT National Transportation Integrated Search

    2012-06-01

    This research program conducted a large experimental program, which consisted of the design, construction, curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice region, and then compared ...

  12. High-performance holographic technologies for fluid-dynamics experiments

    PubMed Central

    Orlov, Sergei S.; Abarzhi, Snezhana I.; Oh, Se Baek; Barbastathis, George; Sreenivasan, Katepalli R.

    2010-01-01

    Modern technologies offer new opportunities for experimentalists in a variety of research areas of fluid dynamics. Improvements are now possible in the state-of-the-art in precision, dynamic range, reproducibility, motion-control accuracy, data-acquisition rate and information capacity. These improvements are required for understanding complex turbulent flows under realistic conditions, and for allowing unambiguous comparisons to be made with new theoretical approaches and large-scale numerical simulations. One of the new technologies is high-performance digital holography. State-of-the-art motion control, electronics and optical imaging allow for the realization of turbulent flows with very high Reynolds number (more than 10⁷) on a relatively small laboratory scale, and quantification of their properties with high space–time resolutions and bandwidth. In-line digital holographic technology can provide complete three-dimensional mapping of the flow velocity and density fields at high data rates (over 1000 frames per second) over a relatively large spatial area with high spatial (1–10 μm) and temporal (better than a few nanoseconds) resolution, and can give accurate quantitative description of the fluid flows, including those of multi-phase and unsteady conditions. This technology can be applied in a variety of problems to study fundamental properties of flow–particle interactions, rotating flows, non-canonical boundary layers and Rayleigh–Taylor mixing. Some of these examples are discussed briefly. PMID:20211881

  13. Optical correlator using very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators

    NASA Technical Reports Server (NTRS)

    Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.

    1993-01-01

    The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale integrated circuit/ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.

  14. Low-Cost and Large-Area Electronics, Roll-to-Roll Processing and Beyond

    NASA Astrophysics Data System (ADS)

    Wiesenhütter, Katarzyna; Skorupa, Wolfgang

    In the following chapter, the authors conduct a literature survey of current advances in state-of-the-art low-cost, flexible electronics. A new emerging trend in the design of modern semiconductor devices dedicated to scaling-up, rather than reducing, their dimensions is presented. To realize volume manufacturing, alternative semiconductor materials with superior performance, fabricated by innovative processing methods, are essential. This review provides readers with a general overview of the material and technology evolution in the area of macroelectronics. Herein, the term macroelectronics (MEs) refers to electronic systems that can cover a large area of flexible media. In stark contrast to well-established micro- and nano-scale semiconductor devices, where property improvement is associated with downscaling the dimensions of the functional elements, in macroelectronic systems their overall size defines the ultimate performance (Sun and Rogers in Adv. Mater. 19:1897-1916, 2007). The major challenges of large-scale production are discussed. Particular attention has been focused on describing advanced, short-term heat treatment approaches, which offer a range of advantages compared to conventional annealing methods. There is no doubt that large-area, flexible electronic systems constitute an important research topic for the semiconductor industry. The ability to fabricate highly efficient macroelectronics by inexpensive processes will have a significant impact on a range of diverse technology sectors. A new era "towards semiconductor volume manufacturing…" has begun.

  15. Advanced Packaging for VLSI/VHSIC (Very Large Scale Integrated Circuits/Very High Speed Integrated Circuits) Applications: Electrical, Thermal, and Mechanical Considerations - An IR&D Report.

    DTIC Science & Technology

    1987-11-01

    Design guidelines have been developed that can be used by circuit engineers to extract the maximum performance from the devices on various board technologies, including multilayer ceramic... Topics covered include attenuation and dispersion effects and the skin effect.

  16. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries.

    PubMed

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-04-07

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared by using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. Holding great promise for large-scale production, the hN-CCs have a large surface area and a high nitrogen content (N/C atomic ratio of ca. 13%), and possess superior crossover resistance, selective activity and catalytic stability towards the oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion batteries, hN-CCs also exhibit excellent cycle performance and high rate capacity, with a reversible capacity as high as 1046 mA h g⁻¹ at a current density of 50 mA g⁻¹ after 50 cycles. These features make the hN-CCs developed in this study promising as substitutes for expensive noble-metal catalysts in next-generation alkaline fuel cells, and as advanced electrode materials in lithium-ion batteries.

  17. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as to evaluate simple, passive, bleedless inlet boundary-layer control options. During this effort two models were tested: a dual-stream inlet intended to model potential flight hardware and a single-stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  18. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm⁻², respectively. The Fe/Cd RFB exhibits stable efficiencies with a capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of $108 kWh⁻¹ for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention make the Fe/Cd RFB a promising solution for large-scale energy storage systems.

  19. Method for revealing biases in precision mass measurements

    NASA Astrophysics Data System (ADS)

    Vabson, V.; Vendt, R.; Kübarsepp, T.; Noorma, M.

    2013-02-01

    A practical method for the quantification of systematic errors of large-scale automatic comparators is presented. This method is based on a comparison of the performance of two different comparators. First, the differences of 16 equal partial loads of 1 kg are measured with a high-resolution mass comparator featuring insignificant bias and 1 kg maximum load. At the second stage, a large-scale comparator is tested by using combined loads with known mass differences. Comparing the different results, the biases of any comparator can be easily revealed. These large-scale comparator biases are determined over a 16-month period, and for the 1 kg loads, a typical pattern of biases in the range of ±0.4 mg is observed. The temperature differences recorded inside the comparator concurrently with mass measurements are found to remain within a range of ±30 mK, which obviously has a minor effect on the detected biases. Seasonal variations imply that the biases likely arise mainly due to the functioning of the environmental control at the measurement location.

  20. Demonstration-scale evaluation of a novel high-solids anaerobic digestion process for converting organic wastes to fuel gas and compost.

    PubMed

    Rivard, C J; Duff, B W; Dickow, J H; Wiles, C C; Nagle, N J; Gaddy, J L; Clausen, E C

    1998-01-01

    Early evaluations of the bioconversion potential for combined wastes such as tuna sludge and sorted municipal solid waste (MSW) were conducted at laboratory scale and compared conventional low-solids, stirred-tank anaerobic systems with the novel, high-solids anaerobic digester (HSAD) design. Enhanced feedstock conversion rates and yields were determined for the HSAD system. In addition, the HSAD system demonstrated superior resiliency to process failure. Utilizing relatively dry feedstocks, the HSAD system is approximately one-tenth the size of conventional low-solids systems. In addition, the HSAD system is capable of organic loading rates (OLRs) on the order of 20-25 g volatile solids per liter digester volume per d (gVS/L/d), roughly 4-5 times those of conventional systems. Current efforts involve developing a demonstration-scale (pilot-scale) HSAD system. A two-ton/d plant has been constructed in Stanton, CA and is currently in the commissioning/startup phase. The purposes of the project are to verify laboratory- and intermediate-scale process performance; test the performance of large-scale prototype mechanical systems; demonstrate the long-term reliability of the process; and generate the process and economic data required for the design, financing, and construction of full-scale commercial systems. This study presents confirmatory fermentation data obtained at intermediate scale and a snapshot of the pilot-scale project.

  1. The Lhc Cryomagnet Supports in Glass-Fiber Reinforced Epoxy: a Large Scale Industrial Production with High Reproducibility in Performance

    NASA Astrophysics Data System (ADS)

    Poncet, A.; Struik, M.; Trigo, J.; Parma, V.

    2008-03-01

    The approximately 1700 LHC main ring superconducting magnets are supported within their cryostats on 4700 low heat in-leak, column-type supports. The supports were designed to ensure a precise and stable positioning of the heavy dipole and quadrupole magnets while keeping thermal conduction heat loads within budget. A trade-off between mechanical and thermal properties, as well as cost considerations, led to the choice of glass fibre reinforced epoxy (GFRE). Resin Transfer Moulding (RTM), featuring a high level of automation and control, was the manufacturing process retained to ensure the reproducibility of the performance of the supports throughout the large production run. The Spanish aerospace company EADS-CASA Espacio developed the specific RTM process and produced the total quantity of supports between 2001 and 2004. This paper describes the development and the production of the supports, and presents the production experience and the achieved performance.

  2. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce

    PubMed Central

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2016-01-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support for high-performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large-scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high-performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel with our system demonstration, we explain the system architecture and details of how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real-world use cases: large-scale pathology analytical imaging, and geo-spatial data warehousing. PMID:27617325
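
    The partition-then-parallelize pattern described above can be sketched compactly. The toy Python example below is not Hadoop-GIS code and uses local processes rather than MapReduce: it grid-partitions points (space-based partitioning), runs a rectangular window query on each tile in parallel (the map side), and merges the per-tile results (the reduce side); the tile size and query window are arbitrary.

```python
# Toy spatial window query over grid-partitioned points, evaluated in parallel.
from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict
import random

TILE = 10.0  # tile edge length; arbitrary for this sketch

def partition(points):
    tiles = defaultdict(list)
    for x, y in points:
        tiles[(int(x // TILE), int(y // TILE))].append((x, y))
    return tiles

def query_tile(args):
    tile_points, (xmin, ymin, xmax, ymax) = args
    return [(x, y) for x, y in tile_points
            if xmin <= x <= xmax and ymin <= y <= ymax]

def window_query(points, box):
    tiles = partition(points)                       # space-based partitioning
    with ProcessPoolExecutor() as pool:             # "map" over tiles
        parts = pool.map(query_tile, [(pts, box) for pts in tiles.values()])
    return [p for part in parts for p in part]      # "reduce": merge results

if __name__ == "__main__":
    pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10000)]
    print(len(window_query(pts, (20, 20, 40, 40))), "points in the query window")
```
    An obvious refinement, omitted for brevity, is to skip tiles whose extents do not intersect the query window, which is where spatial partitioning pays off at scale.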

  3. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    PubMed

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken not only to assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but also to begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  5. A hybrid 2D/3D inspection concept with smart routing optimisation for high throughput, high dynamic range and traceable critical dimension metrology

    NASA Astrophysics Data System (ADS)

    Jones, Christopher W.; O’Connor, Daniel

    2018-07-01

    Dimensional surface metrology is required to enable advanced manufacturing process control for products such as large-area electronics, microfluidic structures, and light management films, where performance is determined by micrometre-scale geometry or roughness formed over metre-scale substrates. While able to perform 100% inspection at a low cost, commonly used 2D machine vision systems are insufficient to assess all of the functionally relevant critical dimensions in such 3D products on their own. While current high-resolution 3D metrology systems are able to assess these critical dimensions, they have a relatively small field of view and are thus much too slow to keep up with full production speeds. A hybrid 2D/3D inspection concept is demonstrated, combining a small field of view, high-performance 3D topography-measuring instrument with a large field of view, high-throughput 2D machine vision system. In this concept, the location of critical dimensions and defects are first registered using the 2D system, then smart routing algorithms and high dynamic range (HDR) measurement strategies are used to efficiently acquire local topography using the 3D sensor. A motion control platform with a traceable position referencing system is used to recreate various sheet-to-sheet and roll-to-roll inline metrology scenarios. We present the artefacts and procedures used to calibrate this hybrid sensor system for traceable dimensional measurement, as well as exemplar measurement of optically challenging industrial test structures.

  6. Field of genes: using Apache Kafka as a bioinformatic data repository.

    PubMed

    Lawlor, Brendan; Lynch, Richard; Mac Aogáin, Micheál; Walsh, Paul

    2018-04-01

    Bioinformatic research is increasingly dependent on large-scale datasets, accessed either from private or public repositories. An example of a public repository is the National Center for Biotechnology Information's (NCBI's) Reference Sequence (RefSeq) database. These repositories must decide in what form to make their data available. Unstructured data can be put to almost any use but are limited in how access to them can be scaled. Highly structured data offer improved performance for specific algorithms but limit the wider usefulness of the data. We present an alternative: lightly structured data stored in Apache Kafka in a way that is amenable to parallel access and streamed processing, including subsequent transformations into more highly structured representations. We contend that this approach could provide a flexible and powerful nexus of bioinformatic data, bridging the gap between low structure on one hand, and high performance and scale on the other. To demonstrate this, we present a proof-of-concept version of NCBI's RefSeq database using this technology. We measure the performance and scalability characteristics of this alternative with respect to flat files. The proof of concept scales almost linearly as more compute nodes are added, outperforming the standard approach using files. Apache Kafka merits consideration as a fast and more scalable, yet general-purpose, way to store and retrieve bioinformatic data, for public, centralized reference datasets such as RefSeq and for private clinical and experimental data.
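
    A minimal sketch of the "lightly structured" idea follows. It is not the Field of Genes code: it assumes a Kafka broker at localhost:9092, the kafka-python client, and a hypothetical topic name and record schema in which each sequence record is a small JSON message that any number of consumers can stream and re-process in parallel.

```python
# Sequence records as lightly structured JSON messages on a Kafka topic
# (illustrative sketch; broker address, topic name and schema are assumptions).
import json
from kafka import KafkaProducer, KafkaConsumer

TOPIC = "refseq-records"       # hypothetical topic name
BROKER = "localhost:9092"      # assumes a locally running broker

def publish(records):
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda r: json.dumps(r).encode("utf-8"))
    for rec in records:
        producer.send(TOPIC, rec)      # key-less; partitioning left to Kafka
    producer.flush()

def stream():
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        consumer_timeout_ms=5000)      # stop iterating when the topic is idle
    for msg in consumer:
        yield msg.value["accession"], len(msg.value["sequence"])

if __name__ == "__main__":
    publish([{"accession": "EXAMPLE_0001", "sequence": "ACGT" * 10}])  # fake record
    for accession, length in stream():
        print(accession, length)
```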

  7. No Country Left Behind: Rhetoric and Reality of International Large-Scale Assessment. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Feuer, Michael J.

    2011-01-01

    Few arguments about education are as effective at galvanizing public attention and motivating political action as those that compare the performance of students with their counterparts in other countries and that connect academic achievement to economic performance. Because data from international large-scale assessments (ILSA) have a powerful…

  8. InP nanopore arrays for photoelectrochemical hydrogen generation.

    PubMed

    Li, Qiang; Zheng, Maojun; Zhang, Bin; Zhu, Changqing; Wang, Faze; Song, Jingnan; Zhong, Miao; Ma, Li; Shen, Wenzhong

    2016-02-19

    We report a facile and large-scale fabrication of highly ordered one-dimensional (1D) indium phosphide (InP) nanopore arrays (NPs) and their application as photoelectrodes for photoelectrochemical (PEC) hydrogen production. These InP NPs exhibit superior PEC performance due to their excellent light-trapping characteristics, high-quality 1D conducting channels and large surface areas. The photocurrent density of the optimized InP NPs is 8.9 times higher than that of the planar counterpart at an applied potential of +0.3 V versus RHE under AM 1.5G illumination (100 mW cm⁻²). In addition, the onset potential of the InP NPs exhibits a 105 mV cathodic shift relative to the planar control. The superior performance of the nanoporous samples is further explained by Mott-Schottky and electrochemical impedance spectroscopy analysis.

  9. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  10. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems, using "Local Clustering Organization (LCO)" as a new method for the job-shop scheduling problem (JSP). Building on earlier studies in which the usual LCO performed effectively on large-scale scheduling, it is examined whether solving the JSP with LCO can maintain stability while inducing better solutions. To improve the performance of the JSP solution, the optimization process of LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO uses an algorithm that efficiently improves the scheduling evaluation through a clustered parallel search extending over plural machines. Results obtained by applying the extended LCO to problems of various scales show that it reduces the makespan and improves the stability of performance.

  11. Scalable nuclear density functional theory with Sky3D

    NASA Astrophysics Data System (ADS)

    Afibuzzaman, Md; Schuetrumpf, Bastian; Aktulga, Hasan Metin

    2018-02-01

    In nuclear astrophysics, quantum simulations of large inhomogeneous dense systems as they appear in the crusts of neutron stars present big challenges. The number of particles in a simulation with periodic boundary conditions is strongly limited due to the immense computational cost of the quantum methods. In this paper, we describe techniques for an efficient and scalable parallel implementation of Sky3D, a nuclear density functional theory solver that operates on an equidistant grid. Presented techniques allow Sky3D to achieve good scaling and high performance on a large number of cores, as demonstrated through detailed performance analysis on a Cray XC40 supercomputer.

  12. More robust regional precipitation projection from selected CMIP5 models based on multiple-dimensional metrics

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, L.; Leung, L. R.; Lin, G.; Lu, J.; Gao, Y.; Zhang, Y.

    2017-12-01

    Projecting precipitation changes is challenging because of incomplete understanding of the climate system and biases and uncertainty in climate models. In East Asia, where summer precipitation is dominantly influenced by the monsoon circulation, the global models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) give widely varying projections of precipitation change for the 21st century. It is critical for the community to know which models' projections are more reliable in response to natural and anthropogenic forcings. In this study we defined multi-dimensional metrics measuring model performance in simulating the present-day large-scale circulation, regional precipitation, and the relationship between them. The large-scale circulation features examined in this study include the lower-tropospheric southwesterly winds, the western North Pacific subtropical high, the South China Sea subtropical high, and the East Asian westerly jet in the upper troposphere. Each of these circulation features transports moisture to East Asia, enhancing the moist static energy and strengthening the Meiyu moisture front that is the primary mechanism for precipitation generation in eastern China. Based on these metrics, the 30 models in the CMIP5 ensemble are classified into three groups. Models in the top-performing group projected regional precipitation patterns that are more similar to each other than those in the bottom or middle performing groups, and they consistently projected statistically significant increasing trends in two of the large-scale circulation indices and in precipitation. In contrast, models in the bottom or middle performing groups projected small drying or no trends in precipitation. We also find that a model that merely reproduces the observed precipitation climatology reasonably well is not guaranteed to give a more reliable projection of future precipitation, because good simulation skill can be achieved through compensating errors from multiple sources. Herein, the potential for more robust projections of precipitation changes at the regional scale is demonstrated through the use of discriminating metrics to subsample the multi-model ensemble. The results from this study provide insights into how to select models from the CMIP ensemble to project regional climate and hydrological cycle changes.
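
    The metric-based sub-selection can be illustrated with a simplified sketch. The two skill measures below (pattern correlation and normalized RMSE against a reference field) are stand-ins for the study's multi-dimensional metrics, and the model fields are synthetic; the point is only the workflow of scoring, ranking, and splitting the ensemble into top, middle and bottom groups.

```python
# Rank an ensemble of model fields against a reference and split into thirds
# (simplified stand-in for the multi-dimensional metrics used in the study).
import numpy as np

def skill(model_field, obs_field):
    # Pattern correlation (higher is better) and RMSE normalized by the
    # observed standard deviation (lower is better), combined so that a
    # larger score indicates a better-performing model.
    m, o = model_field.ravel(), obs_field.ravel()
    corr = np.corrcoef(m, o)[0, 1]
    nrmse = np.sqrt(np.mean((m - o) ** 2)) / o.std()
    return corr - nrmse

def group_models(model_fields, obs_field):
    scores = {name: skill(f, obs_field) for name, f in model_fields.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    third = max(len(ranked) // 3, 1)
    return ranked[:third], ranked[third:-third], ranked[-third:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obs = rng.standard_normal((40, 60))                 # synthetic reference field
    models = {f"model{i:02d}": obs + (0.2 + 0.1 * i) * rng.standard_normal((40, 60))
              for i in range(30)}                       # synthetic "ensemble"
    top, middle, bottom = group_models(models, obs)
    print("top-performing group:", top)
```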

  13. Synthesis of Large-area Crystalline MoTe2 Atomic layer from Chemical Vapor Deposition

    NASA Astrophysics Data System (ADS)

    Zhou, Lin; Zubair, Ahmad; Xu, Kai; Kong, Jing; Dresselhaus, Mildred

    The controlled synthesis of highly crystalline, large-area molybdenum ditelluride (MoTe2) atomic layers is crucial for the practical applications of this emerging material. Here we develop a novel approach for the growth of large-area, uniform and highly crystalline few-layer MoTe2 films via chemical vapour deposition (CVD). A large-area, atomically thin MoTe2 film has been successfully synthesized by tellurization of a MoO3 film. The as-grown MoTe2 film is uniform, stoichiometric, and highly crystalline. As a result of the high crystallinity, the electronic properties of the MoTe2 film are comparable with those of mechanically exfoliated MoTe2 flakes. Moreover, we found that two different phases of MoTe2 (2H and 1T') can be grown depending on the choice of Mo precursor. Since the MoTe2 film is highly homogeneous, and the size of the film is limited only by the substrate and the CVD system size, our growth method paves the way for large-scale application of MoTe2 in high-performance nanoelectronics and optoelectronics.

  14. Enhancing the Spectral Hardening of Cosmic TeV Photons by Mixing with Axionlike Particles in the Magnetized Cosmic Web.

    PubMed

    Montanino, Daniele; Vazza, Franco; Mirizzi, Alessandro; Viel, Matteo

    2017-09-08

    Large-scale extragalactic magnetic fields may induce conversions between very-high-energy photons and axionlike particles (ALPs), thereby shielding the photons from absorption on the extragalactic background light. However, in simplified "cell" models, used so far to represent extragalactic magnetic fields, this mechanism would be strongly suppressed by current astrophysical bounds. Here we consider a recent model of extragalactic magnetic fields obtained from large-scale cosmological simulations. Such simulated magnetic fields would have large enhancement in the filaments of matter. As a result, photon-ALP conversions would produce a significant spectral hardening for cosmic TeV photons. This effect would be probed with the upcoming Cherenkov Telescope Array detector. This possible detection would give a unique chance to perform a tomography of the magnetized cosmic web with ALPs.

  15. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  16. Physical and Numerical Model Studies of Cross-flow Turbines Towards Accurate Parameterization in Array Simulations

    NASA Astrophysics Data System (ADS)

    Wosnik, M.; Bachant, P.

    2014-12-01

    Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of this work will be a cross-flow turbine actuator line model to be used as an extension to the OpenFOAM computational fluid dynamics (CFD) software framework, which will likely require modifications to commonly-used dynamic stall models, in consideration of the turbines' high angle of attack excursions during normal operation.

  17. Hardware Architectures for Data-Intensive Computing Problems: A Case Study for String Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data, which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High-performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high-performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, on the number of patterns to search and on the number of matches, and poses significant challenges to current high-performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. In this paper, we discuss the implementation of the Aho-Corasick algorithm for GPU-accelerated high-performance systems. We present an optimized implementation of Aho-Corasick for GPUs and discuss its tradeoffs on the Tesla T10 and the new Tesla T20 (codename Fermi) GPUs. We then integrate the optimized GPU code, respectively, in an MPI-based and in a pthreads-based load balancer to enable execution of the algorithm on clusters and large shared-memory multiprocessors (SMPs) accelerated with multiple GPUs.
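
    For orientation, a compact CPU-side reference of the Aho-Corasick automaton is sketched below in Python; the paper's contribution is the optimized GPU and cluster implementation, which is not reproduced here.

```python
# Minimal Aho-Corasick: build a trie with failure links, then scan the text
# once, reporting (start position, pattern) for every match.
from collections import deque

def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]      # per-state transition/failure/output
    for pat in patterns:                      # 1) trie construction
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())           # 2) failure links via BFS
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit matches ending here
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

if __name__ == "__main__":
    print(search("GATTACAGATTTACA", ["GATT", "TTA", "ACA"]))
```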

  18. One-step synthesis of large-scale graphene film doped with gold nanoparticles at liquid-air interface for electrochemistry and Raman detection applications.

    PubMed

    Zhang, Panpan; Huang, Ying; Lu, Xin; Zhang, Siyu; Li, Jingfeng; Wei, Gang; Su, Zhiqiang

    2014-07-29

    We demonstrated a facile one-step synthesis strategy for the preparation of a large-scale reduced graphene oxide multilayered film doped with gold nanoparticles (RGO/AuNP film) and applied this film as a functional nanomaterial for electrochemistry and Raman detection applications. The related applications of the fabricated RGO/AuNP film in an electrochemical nonenzymatic H2O2 biosensor, the electrochemical oxygen reduction reaction (ORR), and surface-enhanced Raman scattering (SERS) detection were investigated. Electrochemical data indicate that the H2O2 biosensor fabricated with the RGO/AuNP film shows a wide linear range, a low limit of detection, high selectivity, and long-term stability. In addition, it was proved that the created RGO/AuNP film also exhibits excellent ORR electrochemical catalysis performance. The created RGO/AuNP film, when serving as a SERS biodetection platform, presents outstanding performance in detecting 4-aminothiophenol with an enhancement factor of approximately 5.6 × 10⁵, as well as in 2-thiouracil sensing at concentrations as low as 1 μM. It is expected that this facile strategy for fabricating large-scale graphene films doped with metallic nanoparticles will spark inspiration for preparing functional nanomaterials and further extend their applications in drug delivery, wastewater purification, and bioenergy.

  19. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.

  20. Results of Long Term Life Tests of Large Scale Lithium-Ion Cells

    NASA Astrophysics Data System (ADS)

    Inoue, Takefumi; Imamura, Nobutaka; Miyanaga, Naozumi; Yoshida, Hiroaki; Komada, Kanemi

    2008-09-01

    High-energy-density Li-ion cells have been introduced into recent satellites and other space applications. We started development of large-scale Li-ion cells for space applications in 1997, and the chemical design was fixed in 1999. It is very important to confirm life performance for satellite applications, because they require long mission lives, such as 15 years for GEO and 5 to 7 years for LEO. Therefore, we started life tests under various conditions, and the tests have now reached 8 to 9 years in actual calendar time. Semi-accelerated GEO tests, which impose both calendar and cycle loss, have reached 42 seasons, corresponding to 21 years in orbit. The specific energy range is 120-130 Wh/kg at EOL. According to the test results, we have confirmed that our Li-ion cell meets general requirements for space applications such as GEO and LEO with quite high specific energy.

  1. Eco-friendly Energy Storage System: Seawater and Ionic Liquid Electrolyte.

    PubMed

    Kim, Jae-Kwang; Mueller, Franziska; Kim, Hyojin; Jeong, Sangsik; Park, Jeong-Sun; Passerini, Stefano; Kim, Youngsik

    2016-01-08

    As existing battery technologies struggle to meet the requirements for widespread use in the field of large-scale energy storage, novel concepts are urgently needed concerning batteries that have high energy densities, low costs, and high levels of safety. Here, a novel eco-friendly energy storage system (ESS) using seawater and an ionic liquid is proposed for the first time; this represents an intermediate system between a battery and a fuel cell, and is accordingly referred to as a hybrid rechargeable cell. Compared to conventional organic electrolytes, the ionic liquid electrolyte significantly enhances the cycle performance of the seawater hybrid rechargeable system, acting as a very stable interface layer between the Sn-C (Na storage) anode and the NASICON (Na3Zr2Si2PO12) ceramic solid electrolyte, making this system extremely promising for cost-efficient and environmentally friendly large-scale energy storage. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A networked voting rule for democratic representation

    NASA Astrophysics Data System (ADS)

    Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir

    2018-03-01

    We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process.

  3. Energy transfer, pressure tensor, and heating of kinetic plasma

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Matthaeus, William H.; Parashar, Tulasi N.; Haggerty, Colby C.; Roytershteyn, Vadim; Daughton, William; Wan, Minping; Shi, Yipeng; Chen, Shiyi

    2017-07-01

    The kinetic plasma turbulence cascade spans multiple scales, ranging from macroscopic fluid flow to sub-electron scales. Mechanisms that dissipate large-scale energy, terminate the inertial-range cascade, and convert kinetic energy into heat are hotly debated. Here, we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows the highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, −(P · ∇) · u, can open a channel of energy conversion between fluid flow and random motions, which contains a collision-free generalization of the viscous dissipation in a collisional fluid. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
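
    The pressure-work diagnostic can be illustrated on gridded data. The sketch below (not the authors' analysis code) evaluates −(P · ∇) · u = −Σ_ij P_ij ∂u_i/∂x_j on a 2D grid with synthetic fields; for the isotropic pressure tensor used in the example this reduces to −p ∇ · u, which vanishes for the chosen shear flow.

```python
# Pressure work density  -(P·∇)·u = -sum_ij P_ij dui/dxj  on a 2D grid
# (illustrative diagnostic on synthetic fields, not the simulation analysis).
import numpy as np

def pressure_work(P, u, dx, dy):
    """P: (2, 2, ny, nx) pressure-tensor components P_ij;
       u: (2, ny, nx) velocity components u_i; index 0 is x, 1 is y."""
    work = np.zeros_like(u[0])
    for i in range(2):
        dui_dy, dui_dx = np.gradient(u[i], dy, dx)   # derivatives along y, x
        grads = (dui_dx, dui_dy)                     # j = 0 -> d/dx, j = 1 -> d/dy
        for j in range(2):
            work -= P[i, j] * grads[j]
    return work

if __name__ == "__main__":
    ny = nx = 64
    dx = dy = 2 * np.pi / nx
    y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")
    u = np.stack([np.sin(y), np.zeros_like(x)])            # simple shear flow
    P = np.zeros((2, 2, ny, nx)); P[0, 0] = P[1, 1] = 1.0  # isotropic pressure
    print("mean pressure work:", pressure_work(P, u, dx, dy).mean())  # ~0 here
```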

  4. A 32-bit NMOS microprocessor with a large register file

    NASA Astrophysics Data System (ADS)

    Sherburne, R. W., Jr.; Katevenis, M. G. H.; Patterson, D. A.; Sequin, C. H.

    1984-10-01

    Two scaled versions of a 32-bit NMOS reduced instruction set computer CPU, called RISC II, have been implemented on two different processing lines using the simple Mead and Conway layout rules with lambda values of 2 and 1.5 microns (corresponding to drawn gate lengths of 4 and 3 microns), respectively. The design utilizes a small set of simple instructions in conjunction with a large register file in order to provide high performance. This approach has resulted in two surprisingly powerful single-chip processors.

  5. Highly efficient production of rare sugars D-psicose and L-tagatose by two engineered D-tagatose epimerases.

    PubMed

    Bosshart, Andreas; Wagner, Nina; Lei, Lei; Panke, Sven; Bechtold, Matthias

    2016-02-01

    Rare sugars are monosaccharides that do not occur in nature in large amounts. However, many of them demonstrate high potential as low-calorie sweeteners, chiral building blocks or active pharmaceutical ingredients. Their production by enzymatic means from broadly abundant epimers is an attractive alternative to synthesis by traditional organic chemical means, but often suffers from low space-time yields and high enzyme costs due to rapid enzyme degradation. Here we describe the detailed characterization, under operational conditions, of two variants of d-tagatose epimerase that were engineered for high stability and high catalytic activity towards the epimerization of d-fructose to d-psicose and l-sorbose to l-tagatose, respectively. A variant optimized for the production of d-psicose showed a very high total turnover number (TTN) of up to 10^8 catalytic events over the catalyst's lifetime, determined under operational conditions at high temperatures in an enzyme-membrane reactor (EMR). Maximum space-time yields as high as 10.6 kg L^-1 d^-1 were obtained with a small laboratory-scale EMR, indicating excellent performance. A variant optimized for the production of l-tagatose was less stable in the same setting, but still showed a very good TTN of 5.8 × 10^5 and space-time yields of up to 478 g L^-1 d^-1. Together, these results confirm that large-scale enzymatic access to rare sugars is feasible. © 2015 Wiley Periodicals, Inc.

  6. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

    SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risks and costs, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  7. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems the observed data sets are large and the model parameters are numerous, so conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Compared with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment. Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
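
    The key idea, projecting the damped normal equations onto a Krylov subspace that is built once and recycled across all damping parameters, can be sketched in a few lines. The code below is a simplified, dense NumPy illustration of that idea, not the Julia/MADS implementation; the Jacobian, residual, subspace dimension, and damping values are placeholders.

```python
import numpy as np

def krylov_basis(A, b, m):
    """Orthonormal basis of span{b, Ab, ..., A^(m-1) b} built with
    modified Gram-Schmidt."""
    n = b.size
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for k in range(1, m):
        w = A @ V[:, k - 1]
        for j in range(k):
            w -= (V[:, j] @ w) * V[:, j]
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:          # subspace exhausted early
            return V[:, :k]
        V[:, k] = w / nrm
    return V

def lm_steps_all_dampings(J, r, dampings, m=20):
    """Levenberg-Marquardt trial steps for several damping parameters,
    building the Krylov subspace only once and recycling it."""
    A = J.T @ J                  # Gauss-Newton Hessian approximation
    g = J.T @ r                  # gradient of 0.5 * ||r||^2
    V = krylov_basis(A, g, m)    # recycled subspace
    Ar = V.T @ A @ V             # reduced operator
    gr = V.T @ g                 # reduced right-hand side
    steps = {}
    for lam in dampings:
        y = np.linalg.solve(Ar + lam * np.eye(V.shape[1]), gr)
        steps[lam] = -V @ y      # lift the reduced solution back to full space
    return steps

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    J = rng.standard_normal((500, 200))   # Jacobian of a hypothetical model
    r = rng.standard_normal(500)          # residual vector
    for lam, dx in lm_steps_all_dampings(J, r, dampings=[1e-2, 1e-1, 1.0]).items():
        print(f"lambda = {lam:6.2g}   ||step|| = {np.linalg.norm(dx):.3f}")
```

    Because the subspace V is reused, only a small reduced system is re-solved for each damping parameter, which is where the reported speed-up comes from.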

  8. Application of multivariate analysis and mass transfer principles for refinement of a 3-L bioreactor scale-down model--when shake flasks mimic 15,000-L bioreactors better.

    PubMed

    Ahuja, Sanjeev; Jain, Shilpa; Ram, Kripa

    2015-01-01

    Characterization of manufacturing processes is key to understanding the effects of process parameters on process performance and product quality. These studies are generally conducted using small-scale model systems. Because of the importance of the results derived from these studies, the small-scale model should be predictive of the large scale. Typically, small-scale bioreactors, which are considered superior to shake flasks in simulating large-scale bioreactors, are used as the scale-down models for characterizing mammalian cell culture processes. In this article, we describe a case study in which a cell culture unit operation in bioreactors using one-sided pH control, together with their satellites (small-scale runs conducted using the same post-inoculation cultures and nutrient feeds) in 3-L bioreactors and shake flasks, indicated that shake flasks mimicked the large-scale performance better than 3-L bioreactors did. We detail here how multivariate analysis was used to make the pertinent assessment and to generate the hypothesis for refining the existing 3-L scale-down model. Relevant statistical techniques such as principal component analysis, partial least squares, orthogonal partial least squares, and discriminant analysis were used to identify the outliers and to determine the discriminatory variables responsible for performance differences at different scales. The resulting analysis, in combination with mass transfer principles, led to the hypothesis that the observed similarities between 15,000-L and shake flask runs, and the differences between 15,000-L and 3-L runs, were due to pCO2 and pH values. This hypothesis was confirmed by changing the aeration strategy at the 3-L scale. By reducing the initial sparge rate in the 3-L bioreactor, process performance and product quality data moved closer to those of the large scale. © 2015 American Institute of Chemical Engineers.
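
    As a rough illustration of the multivariate-analysis step, the sketch below runs a bare-bones principal component analysis (via the SVD) on a matrix of process measurements from runs at different scales and inspects where the runs fall in score space. The data, variable names, and number of runs are placeholders, not the study's dataset, and the commercial chemometrics tools used in the article are replaced by a NumPy stand-in.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered, unit-variance process data onto its leading
    principal components using the SVD (a minimal stand-in for dedicated
    multivariate-analysis software)."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    explained = (s**2 / np.sum(s**2))[:n_components]
    return scores, explained

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Hypothetical process measurements (e.g. pH, pCO2, titer, viability, ...)
    # for runs at three scales; the numbers here are random placeholders.
    runs_15000L  = rng.normal(0.0, 1.0, size=(6, 8))
    runs_3L      = rng.normal(0.8, 1.0, size=(6, 8))   # offset mimics a scale effect
    runs_shakefl = rng.normal(0.1, 1.0, size=(6, 8))
    X = np.vstack([runs_15000L, runs_3L, runs_shakefl])
    scores, explained = pca_scores(X)
    print("variance explained by PC1, PC2:", np.round(explained, 2))
    # Runs that cluster together in score space behave similarly; in the study,
    # shake-flask runs landed closer to the 15,000-L runs than the 3-L runs did.
```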

  9. Investigating the ion-scale spectral break of solar wind turbulence from low to high plasma beta with high-resolution hybrid simulations

    NASA Astrophysics Data System (ADS)

    Franci, Luca; Landi, Simone; Matteini, Lorenzo; Verdini, Andrea; Hellinger, Petr

    2016-04-01

    We investigate the properties of the ion-scale spectral break of solar wind turbulence by means of two-dimensional, large-scale, high-resolution hybrid particle-in-cell simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we add a spectrum of in-plane, large-scale magnetic and kinetic fluctuations, with energy equipartition and vanishing correlation. We perform a set of ten simulations with different values of the ion plasma beta, β_i. In all cases, we observe the power spectrum of the total magnetic fluctuations following a power law with a spectral index of -5/3 in the inertial range, with a smooth break around ion scales and a steeper power law in the sub-ion range. This spectral break always occurs at spatial scales of the order of the proton gyroradius, ρ_i, and the proton inertial length, d_i = ρ_i/√β_i. When the plasma beta is of the order of 1, the two scales are very close to each other, and determining which of them is directly related to the steepening of the spectra is not straightforward. In order to overcome this limitation, we extended the range of values of β_i over three orders of magnitude, from 0.01 to 10, so that the two ion scales were well separated. This let us observe that the break always seems to occur at the larger of the two scales, i.e., at d_i for β_i < 1 and at ρ_i for β_i > 1. The effect of β_i on the spectra of the parallel and perpendicular magnetic components separately, and of the density fluctuations, is also investigated. We compare all our numerical results with solar wind observations and suggest possible explanations for our findings.
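
    The competition between the two ion scales follows directly from d_i = ρ_i/√β_i. The toy calculation below, with an arbitrary gyroradius, simply tabulates which of the two scales is larger across the range of β_i covered by the simulations; it is an illustration of the scaling, not part of the hybrid code.

```python
import numpy as np

def break_scale(rho_i, beta_i):
    """Return the larger of the proton gyroradius rho_i and the proton
    inertial length d_i = rho_i / sqrt(beta_i), which is where the spectral
    break is observed to sit in the simulations described above."""
    d_i = rho_i / np.sqrt(beta_i)
    return np.maximum(rho_i, d_i)

if __name__ == "__main__":
    rho_i = 1.0  # gyroradius in arbitrary units
    for beta in (0.01, 0.1, 1.0, 10.0):
        d_i = rho_i / np.sqrt(beta)
        print(f"beta_i = {beta:5.2f}:  d_i = {d_i:5.2f}, rho_i = {rho_i:4.2f}, "
              f"break ~ {break_scale(rho_i, beta):5.2f}")
```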

  10. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing.

    PubMed

    Midekisa, Alemayehu; Holl, Felix; Savory, David J; Andrade-Pacheco, Ricardo; Gething, Peter W; Bennett, Adam; Sturrock, Hugh J W

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth's land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources.

  11. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing

    PubMed Central

    Holl, Felix; Savory, David J.; Andrade-Pacheco, Ricardo; Gething, Peter W.; Bennett, Adam; Sturrock, Hugh J. W.

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth’s land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources. PMID:28953943

  12. A small-gap electrostatic micro-actuator for large deflections

    PubMed Central

    Conrad, Holger; Schenk, Harald; Kaiser, Bert; Langa, Sergiu; Gaudet, Matthieu; Schimmanz, Klaus; Stolz, Michael; Lenz, Miriam

    2015-01-01

    Common quasi-static electrostatic micro-actuators have significant limitations in deflection due to electrode separation and unstable drive regions. State-of-the-art electrostatic actuators achieve maximum deflections of approximately one third of the electrode separation, and large electrode separations and high driving voltages are normally required to achieve large actuator movements. Here we report on an electrostatic actuator class, fabricated in a CMOS-compatible process, which allows large deflections with small electrode separations. The concept presented makes the huge electrostatic forces available at nanometre-scale electrode separations accessible for large deflections. Actuations larger than the electrode separation were measured. An analytical theory is compared with measurement and simulation results and enables a closer understanding of these actuators. The scaling behaviour discussed indicates significant future improvements in actuator deflection. The presented driving concept enables the investigation and development of novel microsystems with a high potential for improved device and system performance. PMID:26655557

  13. Phthalimide Copolymer Solar Cells

    NASA Astrophysics Data System (ADS)

    Xin, Hao; Guo, Xugang; Ren, Guoqiang; Kim, Felix; Watson, Mark; Jenekhe, Samson

    2010-03-01

    Photovoltaic properties of bulk heterojunction solar cells based on phthalimide donor-acceptor copolymers have been investigated. Due to the strong π-π stacking of the polymers, the state-of-the-art thermal annealing approach resulted in micro-scale phase separation and thus negligible photocurrent. To achieve an ideal bicontinuous morphology, different strategies, including rapid film drying and mixed-solvent film processing, were explored. In these films, nano-scale phase separation was achieved and a power conversion efficiency of 3.0% was obtained. Absorption and space-charge-limited-current mobility measurements reveal similar light harvesting and hole mobilities in all the films, indicating that morphology is the dominant factor determining the photovoltaic performance. Our results demonstrate that for highly crystalline and/or low-solubility polymers, finding a way to prevent polymer aggregation and large-scale phase separation is critical to realizing high-performance solar cells.

  14. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF-extension phase.

    DOT National Transportation Integrated Search

    2015-03-01

    A large experimental program, consisting of the design, construction, curing, exposure, and structural load testing of 16 large-scale column specimens with a critical lap splice region that were influenced by varying stages of alkali-silica react...

  15. Shear-driven dynamo waves at high magnetic Reynolds number.

    PubMed

    Tobias, S M; Cattaneo, F

    2013-05-23

    Astrophysical magnetic fields often display remarkable organization, despite being generated by dynamo action driven by turbulent flows at high conductivity. An example is the eleven-year solar cycle, which shows spatial coherence over the entire solar surface. The difficulty in understanding the emergence of this large-scale organization is that whereas at low conductivity (measured by the magnetic Reynolds number, Rm) dynamo fields are well organized, at high Rm their structure is dominated by rapidly varying small-scale fluctuations. This arises because the smallest scales have the highest rate of strain, and can amplify magnetic field most efficiently. Therefore most of the effort to find flows whose large-scale dynamo properties persist at high Rm has been frustrated. Here we report high-resolution simulations of a dynamo that can generate organized fields at high Rm; indeed, the generation mechanism, which involves the interaction between helical flows and shear, only becomes effective at large Rm. The shear does not enhance generation at large scales, as is commonly thought; instead it reduces generation at small scales. The solution consists of propagating dynamo waves, whose existence was postulated more than 60 years ago and which have since been used to model the solar cycle.

  16. Large scale production of highly-qualified graphene by ultrasonic exfoliation of expanded graphite under the promotion of (NH4)2CO3 decomposition.

    PubMed

    Wang, Yunwei; Tong, Xili; Guo, Xiaoning; Wang, Yingyong; Jin, Guoqiang; Guo, Xiangyun

    2013-11-29

    Highly-qualified graphene was prepared by the ultrasonic exfoliation of commercial expanded graphite (EG) under the promotion of (NH4)2CO3 decomposition. The yield of graphene from the first exfoliation is 7 wt%, and it can be increased to more than 65 wt% by repeated exfoliations. Atomic force microscopy, x-ray photoelectron spectroscopy and Raman analysis show that the as-prepared graphene has only a few defects or oxides, and more than 95% of the graphene flakes have a thickness of ~1 nm. The electrochemical performance of the as-prepared graphene is comparable to reduced graphene oxide in the determination of dopamine (DA) from a mixed solution of ascorbic acid, uric acid and DA. These results show that the decomposition of (NH4)2CO3 molecules in the EG layers under ultrasonication promotes the exfoliation of graphite and provides a low-priced route for the large-scale production of high-quality graphene.

  17. Large scale production of highly-qualified graphene by ultrasonic exfoliation of expanded graphite under the promotion of (NH4)2CO3 decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Yunwei; Tong, Xili; Guo, Xiaoning; Wang, Yingyong; Jin, Guoqiang; Guo, Xiangyun

    2013-11-01

    Highly-qualified graphene was prepared by the ultrasonic exfoliation of commercial expanded graphite (EG) under the promotion of (NH4)2CO3 decomposition. The yield of graphene from the first exfoliation is 7 wt%, and it can be increased to more than 65 wt% by repeated exfoliations. Atomic force microscopy, x-ray photoelectron spectroscopy and Raman analysis show that the as-prepared graphene has only a few defects or oxides, and more than 95% of the graphene flakes have a thickness of ~1 nm. The electrochemical performance of the as-prepared graphene is comparable to reduced graphene oxide in the determination of dopamine (DA) from a mixed solution of ascorbic acid, uric acid and DA. These results show that the decomposition of (NH4)2CO3 molecules in the EG layers under ultrasonication promotes the exfoliation of graphite and provides a low-priced route for the large-scale production of high-quality graphene.

  18. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
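
    ACC itself is a graphical tool, but the general recipe it embodies, training a classifier on annotated cells and steering further annotation toward the examples the classifier is least sure about, can be sketched briefly. The code below is a generic illustration using scikit-learn (assumed to be available), with random placeholder features and labels; it is not ACC's implementation or file format.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_by_uncertainty(clf, X_unlabeled):
    """Rank unlabeled cells by classifier uncertainty (1 - max class probability);
    annotating the most uncertain cells first is a common way to speed up
    training, in the spirit of the phenotype-discovery workflow described above."""
    proba = clf.predict_proba(X_unlabeled)
    uncertainty = 1.0 - proba.max(axis=1)
    return np.argsort(uncertainty)[::-1]  # most uncertain first

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Hypothetical per-cell feature vectors (intensity, texture, shape, ...)
    X_labeled = rng.standard_normal((200, 16))
    y_labeled = rng.integers(0, 3, size=200)        # three annotated phenotypes
    X_unlabeled = rng.standard_normal((1000, 16))

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_labeled, y_labeled)
    order = rank_by_uncertainty(clf, X_unlabeled)
    print("indices of the 10 cells most worth annotating next:", order[:10])
```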

  19. Large scale modulation of high frequency acoustic waves in periodic porous media.

    PubMed

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

    This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore an absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant differences between modulations of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.

  20. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate these issues. The results show that the high-level program interface makes complex scientific applications on large-scale scientific platforms easier to develop, though a small overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of the platform when processing big-data-based scientific applications. PMID:24574931
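
    For readers unfamiliar with the benchmark, the sketch below gives a plain serial Gauss-Jordan matrix inversion in NumPy. It is only a reference version of the kernel: the block-based, grid-distributed variant evaluated in the paper applies the same eliminations block-row by block-row so that the work and data can be spread across nodes, a detail omitted here.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting.
    This is a plain serial reference version; the paper's block-based variant
    performs the same eliminations block by block so they can be distributed
    across grid or desktop-grid nodes."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])                     # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col])) # partial pivoting
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]           # swap rows
        aug[col] /= aug[col, col]                       # normalize pivot row
        for row in range(n):                            # eliminate the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                                   # right half is A^-1

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 6))
    A_inv = gauss_jordan_inverse(A)
    print("max |A @ A_inv - I| =", np.abs(A @ A_inv - np.eye(6)).max())
```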

  1. Behavior of a high-temperature superconducting conductor on a round core cable at current ramp rates as high as 67.8 kA s-1 in background fields of up to 19 T

    NASA Astrophysics Data System (ADS)

    Michael, P. C.; Bromberg, L.; van der Laan, D. C.; Noyes, P.; Weijers, H. W.

    2016-04-01

    High temperature superconducting (HTS) conductor-on-round-core (CORC®) cables have been developed for use in power transmission systems and large high-field magnets. The use of high-current conductors for large-scale magnets reduces system inductance and limits the peak voltage needed for ramped field operation. A CORC® cable contains a large number of RE-Ba2Cu3O7-δ (RE = rare earth) (REBCO) coated conductors, helically wound in multiple layers on a thin, round former. Large-scale applications, such as fusion and accelerator magnets, require current ramp rates of several kilo-Amperes per second during pulsed operation. This paper presents results that demonstrate the electromagnetic stability of a CORC® cable during transient conditions. Measurements were performed at 4.2 K using a 1.55 m long CORC® cable in background fields of up to 19 T. Repeated current pulses in a background field of 19 T at current ramp rates of up to 67.8 kA s-1, reaching approximately 90% of the cable’s quench current at that field, did not show any sign of degradation in cable performance due to excessive ac loss or electromagnetic instability. The very high current ramp rates applied during these tests were used to compensate, to the extent possible, for the limited cable length accommodated by the test facility, assuming that the measured results can be extrapolated to longer cables operated at proportionally lower current ramp rates. No shift of the superconducting transition to lower current was measured when the current ramp rate was increased from 25 A s-1 to 67.8 kA s-1. These results demonstrate the viability of CORC® cables for use in low-inductance magnets that operate at moderate to high current ramp rates.

  2. 18/20 T high magnetic field scanning tunneling microscope with fully low voltage operability, high current resolution, and large scale searching ability.

    PubMed

    Li, Quanfeng; Wang, Qi; Hou, Yubin; Lu, Qingyou

    2012-04-01

    We present a home-built 18/20 T high magnetic field scanning tunneling microscope (STM) featuring fully low voltage (lower than ±15 V) operability at low temperatures, large-scale searching ability, and 20 fA high current resolution (measured by using a 100 GOhm dummy resistor to replace the tip-sample junction) with a bandwidth of 3.03 kHz. To accomplish low voltage operation, which is important for achieving high precision, low noise, and low interference with the strong magnetic field, the coarse approach is implemented with an inertial slider driven by the lateral bending of a piezoelectric scanner tube (PST) whose inner electrode is axially split into two for enhanced bending per volt. The PST can also drive the same sliding piece to slide inertially in the other bending direction (along the sample surface) of the PST, which realizes the large-area searching ability. The STM head is housed in a three-segment tubular chamber, which is detachable near the STM head for the convenience of sample and tip changes. Atomic resolution images of a graphite sample taken under 17.6 T and 18.0001 T are presented to show its performance. © 2012 American Institute of Physics.

  3. An immersed boundary method for direct and large eddy simulation of stratified flows in complex geometry

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha R.; Sarkar, Sutanu

    2016-10-01

    A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (of the order of a few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field, which is associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large-scale ridge, of the order of a few kilometers in length, that has the same geometrical shape and the same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but a significantly larger Reynolds number. A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large-scale problem exhibits qualitatively similar behavior to the laboratory-scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.

  4. Towards High-Performance Aqueous Sodium-Ion Batteries: Stabilizing the Solid/Liquid Interface for NASICON-Type Na2VTi(PO4)3 using Concentrated Electrolytes.

    PubMed

    Zhang, Huang; Jeong, Sangsik; Qin, Bingsheng; Vieira Carvalho, Diogo; Buchholz, Daniel; Passerini, Stefano

    2018-04-25

    Aqueous Na-ion batteries may offer a solution to the cost and safety issues of high-energy batteries. However, substantial challenges remain in the development of electrode materials and electrolytes enabling high performance and long cycle life. Herein, we report the characterization of a symmetric Na-ion battery with a NASICON-type Na2VTi(PO4)3 electrode material in conventional aqueous and "water-in-salt" electrolytes. Extremely stable cycling performance for 1000 cycles at a high rate (20 C) is found with the highly concentrated aqueous electrolytes owing to the formation of a resistive but protective interphase between the electrode and electrolyte. These results provide important insight for the development of aqueous Na-ion batteries with stable long-term cycling performance for large-scale energy storage. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Fabrication of ordered NiO coated Si nanowire array films as electrodes for a high performance lithium ion battery.

    PubMed

    Qiu, M C; Yang, L W; Qi, X; Li, Jun; Zhong, J X

    2010-12-01

    Highly ordered NiO coated Si nanowire array films are fabricated as electrodes for a high performance lithium ion battery by depositing Ni on electroless-etched Si nanowires and subsequently annealing. The structures and morphologies of the as-prepared films are characterized by X-ray diffraction, scanning electron microscopy, and transmission electron microscopy. When the potential window versus lithium is controlled, the coated NiO can be made selectively electrochemically active to store and release Li+ ions, while the highly conductive crystalline Si cores function as nothing more than a stable mechanical support and an efficient electrical conducting pathway. The hybrid nanowire array films exhibit superior cyclic stability and reversible capacity compared to those of NiO nanostructured films. Owing to the ease of large-scale fabrication and superior electrochemical performance, these hybrid nanowire array films are promising anode materials for high performance lithium-ion batteries.

  6. A dual-scale metal nanowire network transparent conductor for highly efficient and flexible organic light emitting diodes.

    PubMed

    Lee, Jinhwan; An, Kunsik; Won, Phillip; Ka, Yoonseok; Hwang, Hyejin; Moon, Hyunjin; Kwon, Yongwon; Hong, Sukjoon; Kim, Changsoon; Lee, Changhee; Ko, Seung Hwan

    2017-02-02

    Although solution processed metal nanowire (NW) percolation networks are a strong candidate to replace commercial indium tin oxide, their performance is limited in thin film device applications due to reduced effective electrical areas arising from the dimple structure and percolative voids that single size metal NW percolation networks inevitably possess. Here, we present a transparent electrode based on a dual-scale silver nanowire (AgNW) percolation network embedded in a flexible substrate to demonstrate a significant enhancement in the effective electrical area by filling the large percolative voids present in a long/thick AgNW network with short/thin AgNWs. As a proof of concept, the performance enhancement of a flexible phosphorescent OLED is demonstrated with the dual-scale AgNW percolation network compared to the previous mono-scale AgNWs. Moreover, we report that mechanical and oxidative robustness, which are critical for flexible OLEDs, are greatly increased by embedding the dual-scale AgNW network in a resin layer.

  7. How "Boundaryless" Are the Careers of High Potentials, Key Experts and Average Performers?

    ERIC Educational Resources Information Center

    Dries, Nicky; Van Acker, Frederik; Verbruggen, Marijke

    2012-01-01

    The talent management literature declares talent management a prime concern for HRM professionals while the careers literature calls talent management archaic. Three sets of assumptions identified through comparative review of both streams of the literature were tested in a large-scale survey (n = 941). We found more support for the assumptions…

  8. Technical Assessment: Integrated Photonics

    DTIC Science & Technology

    2015-10-01

    ...in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this... communication networks. This has enabled services like the internet, high performance computing, and power-efficient large-scale data centers. The... signal processing, quantum information science, and optics for free space applications. However, major obstacles challenge the implementation of...

  9. Screening for Language Delay: Growth Trajectories of Language Ability in Low- and High-Performing Children

    ERIC Educational Resources Information Center

    Klem, Marianne; Hagtvet, Bente; Hulme, Charles; Gustafsson, Jan-Eric

    2016-01-01

    Purpose: This study investigated the stability and growth of preschool language skills and explores latent class analysis as an approach for identifying children at risk of language impairment. Method: The authors present data from a large-scale 2-year longitudinal study, in which 600 children were assessed with a language-screening tool…

  10. Towards Cloud-Resolving European-Scale Climate Simulations using a fully GPU-enabled Prototype of the COSMO Regional Model

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph

    2014-05-01

    The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km, and therefore cannot fully resolve the relevant upscale and downscale energy cascades. Therefore parametrization of the relevant sub-grid scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the model's ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows representing these processes much closer to first principles and thus should yield an improved representation of the water cycle including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution and consequently the small time steps needed challenge current supercomputers to model large domains and long time scales. The recent innovations in the domain of hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies, but have a larger system memory bandwidth. This is highly beneficial for low-compute-intensity codes such as atmospheric stencil-based models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with an improved performance on traditional x86-based clusters as well as hybrid architectures with GPUs. We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution as well as energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European scale using the GPU-enabled COSMO prototype and elaborate our future plans on how to exploit this new model capability.

  11. Bacterial-cellulose-derived carbon nanofiber@MnO₂ and nitrogen-doped carbon nanofiber electrode materials: an asymmetric supercapacitor with high energy and power density.

    PubMed

    Chen, Li-Feng; Huang, Zhi-Hong; Liang, Hai-Wei; Guan, Qing-Fang; Yu, Shu-Hong

    2013-09-14

    A new kind of high-performance asymmetric supercapacitor is designed with pyrolyzed bacterial cellulose (p-BC)-coated MnO₂ as a positive electrode material and nitrogen-doped p-BC as a negative electrode material via an easy, efficient, large-scale, and green fabrication approach. The optimal asymmetric device possesses an excellent supercapacitive behavior with quite high energy and power density. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Supporting observation campaigns with high resolution modeling

    NASA Astrophysics Data System (ADS)

    Klocke, Daniel; Brueck, Matthias; Voigt, Aiko

    2017-04-01

    High resolution simulation in support of measurement campaigns offers a promising and emerging way to create large-scale context for small-scale observations of clouds and precipitation processes. As these simulations include the coupling of measured small-scale processes with the circulation, they also help to integrate the research communities from modeling and observations and allow for detailed model evaluations against dedicated observations. In connection with the measurement campaign NARVAL (August 2016 and December 2013), simulations with a grid spacing of 2.5 km for the tropical Atlantic region (9000x3300 km), with local refinement to 1.2 km for the western part of the domain, were performed using the icosahedral non-hydrostatic (ICON) general circulation model. These simulations are in turn used to drive large eddy resolving simulations with the same model for selected days in the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project. The simulations are presented with a focus on selected results showing the benefit for the scientific communities performing atmospheric measurements and numerical modeling of climate and weather. Additionally, an outlook will be given on how similar simulations will support the NAWDEX measurement campaign in the North Atlantic and the AC3 measurement campaign in the Arctic.

  13. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    PubMed

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single-neuron action potentials, high frequency oscillations, and high amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information.
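
    The flavor of the storage pipeline can be sketched with standard-library pieces. The code below delta-encodes a block of integer samples, compresses it losslessly, and guards it with a 32-bit CRC; zlib stands in for the range coder of the actual file format, the block layout is invented for illustration, and the 128-bit encryption step is omitted.

```python
import zlib
import numpy as np

def encode_block(samples):
    """Delta-encode a block of int32 samples, compress it losslessly, and
    append a CRC-32 for integrity checking.  zlib stands in for the range
    coder used in the real file format; encryption is omitted here."""
    samples = np.asarray(samples, dtype=np.int32)
    deltas = np.empty_like(samples)
    deltas[0] = samples[0]                   # keep the first value as-is
    deltas[1:] = np.diff(samples)            # store successive differences
    payload = zlib.compress(deltas.tobytes(), level=9)
    crc = zlib.crc32(payload)
    return payload, crc

def decode_block(payload, crc):
    if zlib.crc32(payload) != crc:
        raise ValueError("block failed CRC check")
    deltas = np.frombuffer(zlib.decompress(payload), dtype=np.int32)
    return np.cumsum(deltas, dtype=np.int64) # undo the differencing

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # A smooth signal plus noise compresses well after differencing.
    t = np.arange(32000)
    samples = (1000 * np.sin(t / 500.0) + rng.normal(0, 5, t.size)).astype(np.int32)
    payload, crc = encode_block(samples)
    restored = decode_block(payload, crc)
    assert np.array_equal(restored, samples)  # lossless round trip
    print(f"compressed {samples.nbytes} bytes to {len(payload)} bytes")
```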

  14. Nested high-resolution large-eddy simulations in WRF to support wind power

    NASA Astrophysics Data System (ADS)

    Mirocha, J.; Kirkil, G.; Kosovic, B.; Lundquist, J. K.

    2009-12-01

    The WRF model’s grid nesting capability provides a potentially powerful framework for simulating flow over a wide range of scales. One such application is the computation of realistic inflow boundary conditions for large eddy simulations (LES) by nesting LES domains within mesoscale domains. While nesting has been widely and successfully applied from GCM to mesoscale resolutions, the WRF model’s nesting behavior at the high-resolution (Δx < 1000 m) end of the spectrum is less well understood. Nesting LES within mesoscale domains can significantly improve turbulent flow prediction at the scale of a wind park, providing a basis for superior site characterization, or for improved simulation of turbulent inflows encountered by turbines. We investigate WRF’s grid nesting capability at high mesh resolutions using nested mesoscale and large-eddy simulations. We examine the spatial scales required for flow structures to equilibrate to the finer mesh as flow enters a nest, and how the process depends on several parameters, including grid resolution, turbulence subfilter stress models, relaxation zones at nest interfaces, flow velocities, surface roughnesses, terrain complexity and atmospheric stability. Guidance on appropriate domain sizes and turbulence models for LES in light of these results is provided. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-416482).

  15. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
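
    A minimal version of the superpixel-plus-Random-Forest segmentation step might look like the sketch below, assuming scikit-image and scikit-learn are available. The image, the per-superpixel features (mean color only), and the way labels are generated are placeholders; the published pipeline uses richer descriptors and manually annotated training images.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, segments):
    """Mean RGB color per superpixel - a deliberately simple feature set."""
    n = segments.max() + 1
    feats = np.zeros((n, image.shape[2]))
    for label in range(n):
        feats[label] = image[segments == label].mean(axis=0)
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Hypothetical RGB image: greenish "plant" blob on a darker background.
    image = rng.uniform(0.1, 0.3, size=(128, 128, 3))
    image[40:90, 40:90, 1] += 0.5                       # boost green channel
    image = np.clip(image, 0, 1)

    segments = slic(image, n_segments=200, compactness=10)
    segments = segments - segments.min()                # make labels start at 0
    X = superpixel_features(image, segments)

    # Hypothetical annotations: call a superpixel "plant" if its mean green
    # value is high.  In practice these labels come from annotated images.
    y = (X[:, 1] > 0.45).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    plant_mask = clf.predict(X)[segments].astype(bool)  # per-pixel mask
    print("estimated plant area fraction:", plant_mask.mean().round(3))
```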

  16. Large-scale Electrophysiology: Acquisition, Compression, Encryption, and Storage of Big Data

    PubMed Central

    Brinkmann, Benjamin H.; Bower, Mark R.; Stengel, Keith A.; Worrell, Gregory A.; Stead, Matt

    2009-01-01

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18-bits of A/D resolution are capable of resolving extracellular voltages spanning single neuron action potentials, high frequency oscillations, and high amplitude ultraslow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real-time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable, electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range encoded differences, a 32-bit cyclically redundant checksum to ensure data integrity, and 128-bit encryption for protection of patient information. PMID:19427545

  17. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the first set of simulations. The study shows that, for indiscriminate nudging, the error at both the large and small scales is minimized for a nudging time close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally provided at a much lower frequency than the model time step (e.g., 6-hourly analyses) with a basic interpolation between the fields, the optimum nudging time differs from zero, while remaining smaller than the predictability time.
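
    The difference between indiscriminate and spectral nudging comes down to which scales the relaxation term acts on. The one-dimensional toy below, which is not the two-layer quasi-geostrophic model, applies one nudging step either to the full field or only to its large-scale Fourier modes and checks what happens to a small-scale component; the cutoff wavenumber and relaxation time are arbitrary illustrative choices.

```python
import numpy as np

def nudge(state, driving, tau, dt, spectral=False, k_cut=4):
    """One nudging step  x <- x + dt * (x_drive - x) / tau.
    With spectral=True the relaxation is applied only to Fourier modes with
    wavenumber <= k_cut, i.e. only the large scales are pulled toward the
    driving field, as in spectral nudging."""
    increment = dt * (driving - state) / tau
    if spectral:
        inc_hat = np.fft.rfft(increment)
        inc_hat[np.arange(inc_hat.size) > k_cut] = 0.0   # drop small-scale part
        increment = np.fft.irfft(inc_hat, n=state.size)
    return state + increment

def mode_amplitude(field, k):
    return 2.0 * np.abs(np.fft.rfft(field))[k] / field.size

if __name__ == "__main__":
    n = 128
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    driving = np.sin(x)                              # smooth large-scale driver
    state = np.sin(x) + 0.5 * np.sin(12 * x)         # model field with small scales
    indiscr = nudge(state, driving, tau=1.0, dt=0.1)
    spect = nudge(state, driving, tau=1.0, dt=0.1, spectral=True)
    print("k=12 amplitude  initial: %.3f  indiscriminate: %.3f  spectral: %.3f"
          % (mode_amplitude(state, 12), mode_amplitude(indiscr, 12),
             mode_amplitude(spect, 12)))
```

    The spectral variant leaves the small-scale component untouched, which is why its optimum relaxation time behaves differently from the indiscriminate case discussed above.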

  18. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    PubMed

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
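
    The core trick of FFT-based rigid docking, scoring all translations of a ligand grid against a receptor grid at once via the correlation theorem, can be shown in a few lines. The sketch below uses simple real-valued occupancy grids and is not MEGADOCK's scoring function, which combines shape complementarity and electrostatics on specially constructed complex grids and loops over many ligand rotations.

```python
import numpy as np

def fft_correlation_scores(receptor_grid, ligand_grid):
    """Score every rigid translation of the ligand grid against the receptor
    grid at once via the FFT correlation theorem.  Both grids here are plain
    occupancy maps; production docking codes use richer grid definitions."""
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(R * np.conj(L)))   # one score per shift

if __name__ == "__main__":
    n = 32
    receptor = np.zeros((n, n, n))
    receptor[8:20, 8:20, 8:20] = 1.0               # toy receptor occupancy
    ligand = np.zeros((n, n, n))
    ligand[0:6, 0:6, 0:6] = 1.0                    # toy ligand occupancy
    scores = fft_correlation_scores(receptor, ligand)
    best = np.unravel_index(np.argmax(scores), scores.shape)
    print("best translation (voxels):", best, " score:", scores[best])
```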

  19. Resolving the Circumstellar Environment of the Galactic B[e] Supergiant Star MWC 137 from Large to Small Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraus, Michaela; Nickeler, Dieter H.; Liimets, Tiina

    The Galactic object MWC 137 has been suggested to belong to the group of B[e] supergiants. However, with its large-scale optical bipolar ring nebula and high-velocity jet and knots, it is a rather atypical representative of this class. We performed multiwavelength observations spanning from the optical to the radio regime. Based on optical imaging and long-slit spectroscopic data, we found that the northern parts of the large-scale nebula are predominantly blueshifted, while the southern regions appear mostly redshifted. We developed a geometrical model consisting of two double cones. Although various observational features can be approximated with such a scenario, the observed velocity pattern is more complex. Using near-infrared integral-field unit spectroscopy, we studied the hot molecular gas in the vicinity of the star. The emission from the hot CO gas arises in a small-scale disk revolving around the star on Keplerian orbits. Although the disk itself cannot be spatially resolved, its emission is reflected by the dust arranged in arc-like structures and the clumps surrounding MWC 137 on small scales. In the radio regime, we mapped the cold molecular gas in the outskirts of the optical nebula. We found that large amounts of cool molecular gas and warm dust embrace the optical nebula in the east, south, and west. No cold gas or dust was detected in the north and northwestern regions. Despite the new insights into the nebula kinematics gained from our studies, the real formation scenario of the large-scale nebula remains an open issue.

  20. Chromatographic hydrogen isotope separation

    DOEpatents

    Aldridge, Frederick T.

    1981-01-01

    Intermetallic compounds with the CaCu5 type of crystal structure, particularly LaNiCo4 and CaNi5, exhibit high separation factors and fast equilibrium times and therefore are useful for packing a chromatographic hydrogen isotope separation column. The addition of an inert metal to dilute the hydride improves performance of the column. A large-scale multi-stage chromatographic separation process, run as a secondary process off a hydrogen feedstream from an industrial plant which uses large volumes of hydrogen, can produce large quantities of heavy water at an effective cost for use in heavy water reactors.

  1. Non-negative Tensor Factorization for Robust Exploratory Big-Data Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandrov, Boian; Vesselinov, Velimir Valentinov; Djidjev, Hristo Nikolov

    Currently, large multidimensional datasets are being accumulated in almost every field. Data are: (1) collected by distributed sensor networks in real time all over the globe, (2) produced by large-scale experimental measurements or engineering activities, (3) generated by high-performance simulations, and (4) gathered by electronic communications and social-network activities, etc. Simultaneous analysis of these ultra-large heterogeneous multidimensional datasets is often critical for scientific discoveries, decision-making, emergency response, and national and global security. The importance of such analyses mandates the development of the next generation of robust machine learning (ML) methods and tools for big-data exploratory analysis.
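
    As a small, concrete taste of this family of methods, the sketch below implements the two-way (matrix) analogue of non-negative tensor factorization, classic multiplicative-update NMF, on synthetic data. It is illustrative only: tensor versions update one factor matrix per mode in the same multiplicative spirit, and the rank, iteration count, and data are arbitrary choices rather than anything from the authors' software.

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative matrix factorization X ~ W @ H via Lee-Seung multiplicative
    updates.  This is the two-way analogue of the non-negative tensor
    factorizations discussed above."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update mixing coefficients
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update latent features
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Synthetic data with a known non-negative low-rank structure plus noise.
    W_true = rng.random((100, 4))
    H_true = rng.random((4, 60))
    X = W_true @ H_true + 0.01 * rng.random((100, 60))
    W, H = nmf(X, rank=4)
    rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"relative reconstruction error: {rel_err:.3f}")
```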

  2. Chromatographic hydrogen isotope separation

    DOEpatents

    Aldridge, F.T.

    Intermetallic compounds with the CaCu5 type of crystal structure, particularly LaNiCo4 and CaNi5, exhibit high separation factors and fast equilibrium times and therefore are useful for packing a chromatographic hydrogen isotope separation column. The addition of an inert metal to dilute the hydride improves performance of the column. A large-scale multi-stage chromatographic separation process, run as a secondary process off a hydrogen feedstream from an industrial plant which uses large volumes of hydrogen, can produce large quantities of heavy water at an effective cost for use in heavy water reactors.

  3. Attribution of Large-Scale Climate Patterns to Seasonal Peak-Flow and Prospects for Prediction Globally

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Ward, Philip; Block, Paul

    2018-02-01

    Flood-related fatalities and impacts on society surpass those from all other natural disasters globally. While the inclusion of large-scale climate drivers in streamflow (or high-flow) prediction has been widely studied, an explicit link to global-scale, long-lead prediction, which could improve understanding of potential flood propensity, is lacking. Here we attribute seasonal peak-flow to large-scale climate patterns, including the El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO), using streamflow station observations and simulations from PCR-GLOBWB, a global-scale hydrologic model. Statistically significantly correlated climate patterns and streamflow autocorrelation are subsequently applied as predictors to build a global-scale season-ahead prediction model, with prediction performance evaluated by the mean squared error skill score (MSESS) and the categorical Gerrity skill score (GSS). Globally, fair-to-good prediction skill (20% ≤ MSESS and 0.2 ≤ GSS) is evident for a number of locations (28% of stations and 29% of land area), most notably in data-poor regions (e.g., West and Central Africa). The persistence of such relevant climate patterns can improve understanding of the propensity for floods at the seasonal scale. The prediction approach developed here lays the groundwork for further improving local-scale seasonal peak-flow prediction by identifying relevant global-scale climate patterns. This is especially attractive for regions with limited observations and/or little capacity to develop flood early warning systems.
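
    The headline skill metric is easy to compute once forecasts and observations are in hand. The sketch below evaluates the mean squared error skill score against a climatological baseline on synthetic data; the forecast values, sample size, and noise level are placeholders, and the categorical Gerrity skill score used alongside MSESS in the study is not reproduced here.

```python
import numpy as np

def msess(forecast, observed, reference=None):
    """Mean squared error skill score:  MSESS = 1 - MSE(forecast) / MSE(reference).
    If no reference forecast is given, the climatological mean of the
    observations is used, the usual baseline for seasonal prediction."""
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    if reference is None:
        reference = np.full_like(observed, observed.mean())
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    # Hypothetical standardized seasonal peak flows and a season-ahead forecast
    # built from climate indices (ENSO, PDO, NAO, AMO) plus persistence.
    observed = rng.standard_normal(40)
    forecast = observed + rng.normal(0, 0.8, size=40)   # imperfect but informative
    skill = msess(forecast, observed)
    print(f"MSESS = {skill:.2f}  (values of 0.2, i.e. 20%, or above would count "
          f"as fair-to-good skill in the study above)")
```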

  4. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost and lack a theoretical basis for handling large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law in which we closely examine individual-level data to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For the Benford's Law test on the large-scale data, we used the mean absolute deviation (MAD) formula. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on the 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees for in-depth analysis. Among the eight categories of medical costs, we assessed the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority actions to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when detecting deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
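
    A bare-bones first-digit Benford test with the MAD statistic can be written in a few lines. The sketch below uses synthetic log-uniform amounts as a stand-in for claim data; the conformity cutoffs mentioned in the comments are common conventions from the forensic-accounting literature, quoted as an assumption rather than as the thresholds used in this study.

```python
import numpy as np

BENFORD = np.log10(1.0 + 1.0 / np.arange(1, 10))   # expected first-digit frequencies

def first_digit(values):
    """Leading (most significant) digit of each positive value."""
    values = np.asarray(values, dtype=float)
    values = values[values > 0]
    exponents = np.floor(np.log10(values))
    return (values / 10.0 ** exponents).astype(int)

def benford_mad(values):
    """Mean absolute deviation between observed and Benford first-digit
    frequencies.  Cutoffs around 0.006 / 0.012 / 0.015 (close, acceptable,
    marginal conformity) are often quoted in the forensic-accounting
    literature; they are conventions, not this paper's thresholds."""
    digits = first_digit(values)
    observed = np.bincount(digits, minlength=10)[1:10] / digits.size
    return np.mean(np.abs(observed - BENFORD)), observed

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    # Log-uniform amounts follow Benford's Law closely - a stand-in for
    # legitimate billing data; real claim amounts would be read from records.
    amounts = 10 ** rng.uniform(1, 6, size=50000)
    mad, observed = benford_mad(amounts)
    print("observed first-digit frequencies:", np.round(observed, 3))
    print("MAD versus Benford expectation  :", round(mad, 4))
```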

  5. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  6. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H.-W.; Chang, N.-B., E-mail: nchang@mail.ucf.ed; Chen, J.-C.

    2010-07-15

    Owing to insufficient land resources, incinerators are considered in many countries, such as Japan and Germany, as the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA) - a production economics tool - to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world.
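
    The abstract does not give the authors' exact DEA formulation; as a hedged sketch of the general technique, the code below solves the standard input-oriented CCR envelopment linear program for one decision-making unit using scipy.optimize.linprog. The incinerator inputs and outputs are hypothetical, and the Monte Carlo layer described in the paper (resampling uncertain data and re-solving) is omitted.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one decision-making unit (DMU).

    inputs:  (n_dmu, n_in) array, e.g. operating cost, staff
    outputs: (n_dmu, n_out) array, e.g. tonnes treated, electricity sold
    Solves: min theta  s.t.  sum_j lam_j x_ij <= theta x_i0,
                             sum_j lam_j y_rj >= y_r0,  lam >= 0.
    """
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                               # decision vector [theta, lam_1..lam_n]
    A_in = np.hstack([-X[unit].reshape(m, 1), X.T])           # lam.x_i - theta*x_i0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])               # -lam.y_r <= -y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical data for three incinerators: [cost, staff] in, [waste treated, electricity] out.
X = [[120, 35], [150, 40], [100, 30]]
Y = [[900, 210], [950, 260], [700, 150]]
for j in range(3):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```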

  7. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods, such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models, based on observable large-scale features. Yet, real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
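
    A common way to produce the "varying roughness spectral content" mentioned above is spectral synthesis of a self-affine profile; the sketch below is a generic illustration of that idea, not SeisSol input preparation, and the Hurst exponent, grid spacing, and roughness-to-length ratio are assumed values.

```python
import numpy as np

def self_affine_profile(n=4096, dx=25.0, hurst=0.8, rms_to_length=1e-2, seed=0):
    """Generate a 1D self-affine fault-trace profile by spectral synthesis.

    Fourier amplitudes follow |h(k)| ~ k**-(0.5 + hurst) (i.e. PSD ~ k**-(1 + 2*hurst))
    with random phases; the result is rescaled to a target RMS-roughness-to-length ratio.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx)                 # wavenumbers (cycles per unit length)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-(0.5 + hurst))          # power-law decay; k = 0 term left at zero mean
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    h = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    h *= rms_to_length * (n * dx) / np.std(h)    # normalise the roughness amplitude
    return np.arange(n) * dx, h

x, h = self_affine_profile()
print(f"RMS roughness = {np.std(h):.1f} m over a {x[-1] + x[1]:.0f} m long profile")
```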

  8. Selection and Manufacturing of Membrane Materials for Solar Sails

    NASA Technical Reports Server (NTRS)

    Bryant, Robert G.; Seaman, Shane T.; Wilkie, W. Keats; Miyaucchi, Masahiko; Working, Dennis C.

    2013-01-01

    Commercial metallized polyimide or polyester films and hand-assembly techniques are acceptable for small solar sail technology demonstrations, although scaling this approach to large sail areas is impractical. Opportunities now exist to use new polymeric materials specifically designed for solar sailing applications, and take advantage of integrated sail manufacturing to enable large-scale solar sail construction. This approach has, in part, been demonstrated on the JAXA IKAROS solar sail demonstrator, and NASA Langley Research Center is now developing capabilities to produce ultrathin membranes for solar sails by integrating resin synthesis with film forming and sail manufacturing processes. This paper will discuss the selection and development of polymer material systems for space, and these new processes for producing ultrathin high-performance solar sail membrane films.

  9. Impurity engineering of Czochralski silicon used for ultra large-scaled-integrated circuits

    NASA Astrophysics Data System (ADS)

    Yang, Deren; Chen, Jiahe; Ma, Xiangyang; Que, Duanlin

    2009-01-01

    Impurities in Czochralski silicon (Cz-Si) used for ultra large-scaled-integrated (ULSI) circuits have been believed to deteriorate the performance of devices. In this paper, we review recent progress from our investigation of internal gettering in Cz-Si wafers doped with nitrogen, germanium, and/or a high content of carbon. It has been suggested that those impurities enhance oxygen precipitation, creating both denser bulk microdefects and a denuded zone of the desired width, which benefits the internal gettering of metal contamination. Based on the experimental facts, a potential mechanism for the effect of impurity doping on the internal gettering structure is interpreted, and a new concept of 'impurity engineering' for Cz-Si used for ULSI is proposed.

  10. Extreme-Scale De Novo Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob

    De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis via efficient parallelization of the Meraculous code. Genome assembly software has many stages, each of which stresses different components of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and the communication costs in detail. We present performance results for assembling the human genome and the large hexaploid wheat genome on large supercomputers using up to tens of thousands of cores.

  11. Large-scale synthesis of monodisperse magnesium ferrite via an environmentally friendly molten salt route.

    PubMed

    Lou, Zhengsong; He, Minglong; Wang, Ruikun; Qin, Weiwei; Zhao, Dejian; Chen, Changle

    2014-02-17

    Sub-micrometer-sized magnesium ferrite spheres consisting of uniform small particles have been prepared using a facile, large-scale solid-state reaction employing a molten salt technique. Extensive structural characterization of the as-prepared samples has been performed using scanning electron microscopy, transmission electron microscopy, high-resolution transmission electron microscopy, selected-area electron diffraction, and X-ray diffraction. The yield of the magnesium ferrite sub-micrometer spheres is up to 90%, and these sub-micrometer spheres are made up of square and rectangular nanosheets. The magnetic properties of the magnesium ferrite sub-micrometer spheres were investigated, and the saturation magnetization is about 24.96 emu/g. Moreover, a possible growth mechanism is proposed based on the experimental results.

  12. High-Lift Engine Aeroacoustics Technology (HEAT) Test Program Overview

    NASA Technical Reports Server (NTRS)

    Zuniga, Fanny A.; Smith, Brian E.

    1999-01-01

    The NASA High-Speed Research program developed the High-Lift Engine Aeroacoustics Technology (HEAT) program to demonstrate satisfactory interaction between the jet noise suppressor and high-lift system of a High-Speed Civil Transport (HSCT) configuration at takeoff, climb, approach and landing conditions. One scheme for reducing jet exhaust noise generated by an HSCT is the use of a mixer-ejector system which would entrain large quantities of ambient air into the nozzle exhaust flow through secondary inlets in order to cool and slow the jet exhaust before it exits the nozzle. The effectiveness of such a noise suppression device must be evaluated in the presence of an HSCT wing high-lift system before definitive assessments can be made concerning its acoustic performance. In addition, these noise suppressors must provide the required acoustic attenuation while not degrading the thrust efficiency of the propulsion system or the aerodynamic performance of the high-lift devices on the wing. Therefore, the main objective of the HEAT program is to demonstrate these technologies and understand their interactions on a large-scale HSCT model. The HEAT program is a collaborative effort between NASA-Ames, Boeing Commercial Airplane Group, Douglas Aircraft Corp., Lockheed-Georgia, General Electric and NASA - Lewis. The suppressor nozzles used in the tests were Generation 1 2-D mixer-ejector nozzles made by General Electric. The model used was a 13.5%-scale semi-span model of a Boeing Reference H configuration.

  13. Natural snowfall reveals large-scale flow structures in the wake of a 2.5-MW wind turbine.

    PubMed

    Hong, Jiarong; Toloui, Mostafa; Chamorro, Leonardo P; Guala, Michele; Howard, Kevin; Riley, Sean; Tucker, James; Sotiropoulos, Fotis

    2014-06-24

    To improve power production and structural reliability of wind turbines, there is a pressing need to understand how turbines interact with the atmospheric boundary layer. However, experimental techniques capable of quantifying or even qualitatively visualizing the large-scale turbulent flow structures around full-scale turbines do not exist today. Here we use snowflakes from a winter snowstorm as flow tracers to obtain velocity fields downwind of a 2.5-MW wind turbine in a sampling area of ~36 × 36 m². The spatial and temporal resolutions of the measurements are sufficiently high to quantify the evolution of blade-generated coherent motions, such as the tip and trailing sheet vortices, identify their instability mechanisms and correlate them with turbine operation, control and performance. Our experiment provides an unprecedented in situ characterization of flow structures around utility-scale turbines, and yields significant insights into the Reynolds number similarity issues presented in wind energy applications.

  14. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer complementary information and cross-checks of cosmological parameters estimated from the anisotropies in Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  15. Large Scale Triboelectric Nanogenerator and Self-Powered Pressure Sensor Array Using Low Cost Roll-to-Roll UV Embossing

    PubMed Central

    Dhakar, Lokesh; Gudla, Sudeep; Shan, Xuechuan; Wang, Zhiping; Tay, Francis Eng Hock; Heng, Chun-Huat; Lee, Chengkuo

    2016-01-01

    Triboelectric nanogenerators (TENGs) have emerged as a potential solution for mechanical energy harvesting over conventional mechanisms such as piezoelectric and electromagnetic, due to easy fabrication, high efficiency and wider choice of materials. Traditional fabrication techniques used to realize TENGs involve plasma etching, soft lithography and nanoparticle deposition for higher performance. But lack of truly scalable fabrication processes still remains a critical challenge and bottleneck in the path of bringing TENGs to commercial production. In this paper, we demonstrate fabrication of large scale triboelectric nanogenerator (LS-TENG) using roll-to-roll ultraviolet embossing to pattern polyethylene terephthalate sheets. These LS-TENGs can be used to harvest energy from human motion and vehicle motion from embedded devices in floors and roads, respectively. LS-TENG generated a power density of 62.5 mW m−2. Using roll-to-roll processing technique, we also demonstrate a large scale triboelectric pressure sensor array with pressure detection sensitivity of 1.33 V kPa−1. The large scale pressure sensor array has applications in self-powered motion tracking, posture monitoring and electronic skin applications. This work demonstrates scalable fabrication of TENGs and self-powered pressure sensor arrays, which will lead to extremely low cost and bring them closer to commercial production. PMID:26905285

  16. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate a large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts the physical realism of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy, and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate a different number and approximation of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of the hydrologic processes that dominate different parts of the basin also differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated hydrologic modeling framework which facilitates a seamless, computationally efficient, and accurate integration of the process model with the data model. The flexibility of this framework allows multiscale, multiresolution, adaptive refinement/de-refinement and nested modeling simulations with minimal computational burden. However, performing these simulations and the related calibration of these models over a large basin at higher spatio-temporal resolutions is computationally intensive and requires increasing computing power. With the advent of parallel processing architectures, high computing performance can be achieved by parallelization of the existing serial integrated-hydrologic-model code. This translates to running the same model simulation on a large number of processors, thereby reducing the time needed to obtain a solution. The paper also discusses the implementation of the integrated model on parallel processors, including the mapping of the problem onto a multi-processor environment, methods to incorporate coupling between hydrologic processes using interprocessor communication models, model data structures, and parallel numerical algorithms to obtain high performance.

  17. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we focus on the recently funded AIST 2014 project that uses NEXUS as the core of an oceanographic anomaly detection service and web portal, which we call OceanXtremes.
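
    The chunking idea described above can be illustrated with a simple tiling of a (time, lat, lon) array into fixed-size spatial tiles keyed by bounding box; this is a hedged sketch, not the actual NEXUS schema, and the tile size, grid, and field are assumptions.

```python
import numpy as np

def tile_grid(data, lats, lons, tile_size=32):
    """Split a (time, lat, lon) array into square spatial tiles.

    Yields (tile_id, bounding_box, tile_array) so that each chunk could be stored
    as a row in a horizontally scaled store and located again via a spatial index.
    """
    n_lat, n_lon = data.shape[1], data.shape[2]
    for i in range(0, n_lat, tile_size):
        for j in range(0, n_lon, tile_size):
            tile = data[:, i:i + tile_size, j:j + tile_size]
            bbox = (lats[i], lats[min(i + tile_size, n_lat) - 1],
                    lons[j], lons[min(j + tile_size, n_lon) - 1])
            yield f"tile_{i}_{j}", bbox, tile

# Hypothetical 30-day field on a 0.5-degree global grid.
lats = np.linspace(-89.75, 89.75, 360)
lons = np.linspace(-179.75, 179.75, 720)
field = np.random.rand(30, lats.size, lons.size)
tiles = list(tile_grid(field, lats, lons))
print(f"{len(tiles)} tiles; first bounding box = {tiles[0][1]}")
```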

  18. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth observing missions which employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system, and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods, but each approach had its shortcomings, as detailed in this paper.

  19. Very high temperature fiber processing and testing through the use of ultrahigh solar energy concentration

    NASA Astrophysics Data System (ADS)

    Jacobson, Benjamin A.; Gleckman, Philip L.; Holman, Robert L.; Sagie, Daniel; Winston, Roland

    1991-10-01

    We have demonstrated the feasibility of a high temperature cool-wall optical furnace that harnesses the unique power of concentrated solar heating for advanced materials processing and testing. Our small-scale test furnace achieved temperatures as high as 2400 °C within a 10 mm × 0.44 mm cylindrical hot zone. Optimum performance and efficiency resulted from an innovative two-stage optical design using a long-focal-length, point-focus, conventional primary concentrator and a non-imaging secondary concentrator specifically designed for the cylindrical geometry of the target fiber. A scale-up analysis suggests that even higher temperatures can be achieved over hot zones large enough for practical commercial fiber post-processing and testing.

  20. Development of superconductor magnetic suspension and balance prototype facility for studying the feasibility of applying this technique to large scale aerodynamic testing

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    The unique design and operational characteristics of a prototype magnetic suspension and balance facility which utilizes superconductor technology are described and discussed from the point of view of scalability to large sizes. The successful experimental demonstration of the feasibility of this new magnetic suspension concept of the University of Virginia, together with the success of the cryogenic wind-tunnel concept developed at Langley Research Center, appear to have finally opened the way to clean-tunnel, high-Re aerodynamic testing. Results of calculations corresponding to a two-step design extrapolation from the observed performance of the prototype magnetic suspension system to a system compatible with the projected cryogenic transonic research tunnel are presented to give an order-of-magnitude estimate of expected performance characteristics. Research areas where progress should lead to improved design and performance of large facilities are discussed.

  1. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The symptom characteristic of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high-performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
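
    The balance argument above can be made quantitative with a simple roofline-style estimate: attainable throughput is the minimum of the compute peak and the product of memory bandwidth and arithmetic intensity. The numbers below are illustrative assumptions, not measurements from the paper; the point is only that the memory ceiling stops the system from scaling once enough processors share one bus.

```python
def attainable_gflops(peak_gflops_per_core, cores, bandwidth_gbs, arithmetic_intensity):
    """Roofline-style bound: min(compute peak, bandwidth * flops-per-byte).

    arithmetic_intensity is the number of flops performed per byte moved
    over the shared memory bus.
    """
    compute_roof = peak_gflops_per_core * cores
    memory_roof = bandwidth_gbs * arithmetic_intensity
    return min(compute_roof, memory_roof)

# Illustrative image-filtering kernel: roughly 1.8 flops per byte moved.
ai = 1.8
for cores in (1, 2, 4, 8):
    g = attainable_gflops(peak_gflops_per_core=4.0, cores=cores,
                          bandwidth_gbs=6.4, arithmetic_intensity=ai)
    print(f"{cores} cores sharing one bus: ~{g:.1f} GFLOP/s attainable")
```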

  2. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
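
    The similarity score described above is based on the longest common subsequence (LCS); a minimal sketch of an LCS-based similarity between a segment's candidate road-link sequence and a matched path is shown below. The normalization by the longer sequence and the link identifiers are assumptions for illustration, not necessarily the authors' exact scoring.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences, O(len(a)*len(b))."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(segment_links, path_links):
    """Normalised LCS similarity in [0, 1] between a GPS segment's candidate
    link sequence and a matched shortest-path link sequence."""
    if not segment_links or not path_links:
        return 0.0
    return lcs_length(segment_links, path_links) / max(len(segment_links), len(path_links))

# Hypothetical road-link ID sequences.
seg = ["L12", "L13", "L20", "L21", "L34"]
path = ["L12", "L13", "L19", "L21", "L34", "L35"]
print(f"similarity = {similarity(seg, path):.2f}")
```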

  3. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE PAGES

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    2017-01-01

    With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.

  4. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
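
    As a minimal illustration of the k-t space propagator idea (exact for homogeneous media, as the abstract notes), the sketch below advances the 1D homogeneous wave equation in Fourier space. The medium parameters are assumptions, and the variable sound speed, density, and 2D/3D machinery of the paper are not reproduced.

```python
import numpy as np

def kspace_step(u_prev, u_curr, c0, dt, dx):
    """One exact k-space time step for the 1D homogeneous wave equation.

    In Fourier space: U(k, t+dt) = 2*cos(c0*k*dt)*U(k, t) - U(k, t-dt),
    i.e. leapfrog with (c0*k*dt)^2 replaced by 4*sin^2(c0*k*dt/2).
    """
    k = 2.0 * np.pi * np.fft.fftfreq(u_curr.size, d=dx)
    U_curr, U_prev = np.fft.fft(u_curr), np.fft.fft(u_prev)
    U_next = 2.0 * np.cos(c0 * k * dt) * U_curr - U_prev
    return np.real(np.fft.ifft(U_next))

# Gaussian pulse in a water-like medium (c0 = 1500 m/s) on a periodic 1D grid.
n, dx, c0 = 1024, 1e-4, 1500.0
dt = 2.0 * dx / c0                       # any dt works in the homogeneous case
x = np.arange(n) * dx
u_prev = np.exp(-((x - 0.02) / 1e-3) ** 2)
u_curr = u_prev.copy()                   # ~zero initial velocity: pulse splits in two
for _ in range(200):
    u_prev, u_curr = u_curr, kspace_step(u_prev, u_curr, c0, dt, dx)
print(f"peak after 200 steps at x = {x[np.argmax(u_curr)] * 1e3:.1f} mm")
```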

  5. Scaling Phenomenology in Meson Photoproduction from CLAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biplab Dey, Curtis A. Meyer

    2010-08-01

    In the high energy limit, perturbative QCD predicts that hard scattering amplitudes should follow simple scaling laws. For hard scattering at 90°, we show that experiments support this prediction even in the “medium energy” regime of 2.3 GeV ≤ √s ≤ 2.84 GeV, as long as there are no s-channel resonances present. Our data consists of high statistics measurements for five different exclusive meson photoproduction channels (pω, pη, pη′, K⁺Λ and K⁺Σ⁰) recently obtained from CLAS at Jefferson Lab. The same power-law scaling also leads to “saturated” Regge trajectories at high energies. That is, at large -t and -u, Regge trajectories must approach constant negative integers. We demonstrate the application of saturated Regge phenomenology by performing a partial wave analysis fit to the γp → pη′ differential cross sections.
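
    The "simple scaling laws" referred to above are the constituent counting rules; in standard notation (the specific form written here is an assumption about the convention the authors use), they read:

```latex
% Constituent counting rule for an exclusive two-body process with n elementary
% fields, evaluated at fixed centre-of-mass angle (e.g. 90 degrees):
\[
  \left.\frac{d\sigma}{dt}\right|_{\theta_{\mathrm{cm}}\ \text{fixed}}
  \;\propto\; s^{\,2-n}\, f(\theta_{\mathrm{cm}})
\]
% For photoproduction gamma p -> meson + baryon, n = 1 + 3 + 2 + 3 = 9,
% so the differential cross section is expected to fall as s^{-7}.
```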

  6. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Pincus, R.

    2016-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, only one new prognostic variable, turbulent kinetic energy (TKE), needs to be introduced, making the technique computationally efficient. SHOC is now incorporated into a version of GFS, as well as into the next generation of the NCEP global model, the NOAA Environmental Modeling System (NEMS). Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or the large-scale condensation/deposition; instead, SHOC provides these variables. The radiative transfer parameterization uses cloudiness computed by SHOC. Outstanding problems include high-level tropical cloud fraction being too high in SHOC runs, possibly related to the interaction of SHOC with condensate detrained from deep convection. Future work will consist of evaluating model performance and tuning the physics if necessary, by performing medium-range NWP forecasts with prescribed initial conditions and AMIP-type climate tests with prescribed SSTs. Depending on the results, the model will be tuned or the parameterizations modified. Next, SHOC will be implemented in the NCEP CFS, and tuned and evaluated for climate applications, i.e., seasonal prediction and long coupled climate runs. The impact of the new physics on ENSO, MJO, ISO, monsoon variability, etc. will be examined.

  7. Advanced spacecraft: What will they look like and why

    NASA Technical Reports Server (NTRS)

    Price, Humphrey W.

    1990-01-01

    The next century of spaceflight will witness an expansion in the physical scale of spacecraft, from the extreme of the microspacecraft to the very large megaspacecraft. This will respectively spawn advances in highly integrated and miniaturized components, and also advances in lightweight structures, space fabrication, and exotic control systems. Challenges are also presented by the advent of advanced propulsion systems, many of which require controlling and directing hot plasma, dissipating large amounts of waste heat, and handling very high radiation sources. Vehicle configuration studies for a number of these types of advanced spacecraft were performed, and some of them are presented along with the rationale for their physical layouts.

  8. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  9. Leading Educational Change and Improvement at Scale: Some Inconvenient Truths about System Performance

    ERIC Educational Resources Information Center

    Harris, Alma; Jones, Michelle

    2017-01-01

    The challenges of securing educational change and transformation, at scale, remain considerable. While sustained progress has been made in some education systems (Fullan, 2009; Hargreaves & Shirley, 2009) generally, it remains the case that the pathway to large-scale, system improvement is far from easy or straightforward. While large-scale…

  10. Prepreg and Melt Infiltration Technology Developed for Affordable, Robust Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay; Petko, Jeannie F.

    2004-01-01

    Affordable fiber-reinforced ceramic matrix composites with multifunctional properties are critically needed for high-temperature aerospace and space transportation applications. These materials have various applications in advanced high-efficiency and high-performance engines, airframe and propulsion components for next-generation launch vehicles, and components for land-based systems. A number of these applications require materials with specific functional characteristics: for example, thick component, hybrid layups for environmental durability and stress management, and self-healing and smart composite matrices. At present, with limited success and very high cost, traditional composite fabrication technologies have been utilized to manufacture some large, complex-shape components of these materials. However, many challenges still remain in developing affordable, robust, and flexible manufacturing technologies for large, complex-shape components with multifunctional properties. The prepreg and melt infiltration (PREMI) technology provides an affordable and robust manufacturing route for low-cost, large-scale production of multifunctional ceramic composite components.

  11. High-Tc superconducting materials for electric power applications.

    PubMed

    Larbalestier, D; Gurevich, A; Feldmann, D M; Polyanskii, A

    2001-11-15

    Large-scale superconducting electric devices for power industry depend critically on wires with high critical current densities at temperatures where cryogenic losses are tolerable. This restricts choice to two high-temperature cuprate superconductors, (Bi,Pb)2Sr2Ca2Cu3Ox and YBa2Cu3Ox, and possibly to MgB2, recently discovered to superconduct at 39 K. Crystal structure and material anisotropy place fundamental restrictions on their properties, especially in polycrystalline form. So far, power applications have followed a largely empirical, twin-track approach of conductor development and construction of prototype devices. The feasibility of superconducting power cables, magnetic energy-storage devices, transformers, fault current limiters and motors, largely using (Bi,Pb)2Sr2Ca2Cu3Ox conductor, is proven. Widespread applications now depend significantly on cost-effective resolution of fundamental materials and fabrication issues, which control the production of low-cost, high-performance conductors of these remarkable compounds.

  12. Ultra-high gain diffusion-driven organic transistor.

    PubMed

    Torricelli, Fabrizio; Colalongo, Luigi; Raiteri, Daniele; Kovács-Vajna, Zsolt Miklós; Cantatore, Eugenio

    2016-02-01

    Emerging large-area technologies based on organic transistors are enabling the fabrication of low-cost flexible circuits, smart sensors and biomedical devices. High-gain transistors are essential for the development of large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show a new organic field-effect transistor architecture with a gain larger than 700. This is the highest gain ever reported for organic field-effect transistors. In the proposed organic field-effect transistor, the charge injection and extraction at the metal-semiconductor contacts are driven by the charge diffusion. The ideal conditions of ohmic contacts with negligible contact resistance and flat current saturation are demonstrated. The approach is general and can be extended to any thin-film technology opening unprecedented opportunities for the development of high-performance flexible electronics.

  13. Ultra-high gain diffusion-driven organic transistor

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio; Colalongo, Luigi; Raiteri, Daniele; Kovács-Vajna, Zsolt Miklós; Cantatore, Eugenio

    2016-02-01

    Emerging large-area technologies based on organic transistors are enabling the fabrication of low-cost flexible circuits, smart sensors and biomedical devices. High-gain transistors are essential for the development of large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show a new organic field-effect transistor architecture with a gain larger than 700. This is the highest gain ever reported for organic field-effect transistors. In the proposed organic field-effect transistor, the charge injection and extraction at the metal-semiconductor contacts are driven by the charge diffusion. The ideal conditions of ohmic contacts with negligible contact resistance and flat current saturation are demonstrated. The approach is general and can be extended to any thin-film technology opening unprecedented opportunities for the development of high-performance flexible electronics.

  14. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    PubMed

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the ISBR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
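
    The authors' architecture is not reproduced here; as a hedged toy sketch of the ideas described above (small 3×3×3 kernels and intermediate-layer features fed into the final prediction), a minimal 3D fully convolutional network might look like the following. The framework choice (PyTorch), layer widths, and class count are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    """Toy 3D fully convolutional segmenter: small 3x3x3 kernels and an
    intermediate feature map concatenated into the final voxel-wise prediction."""
    def __init__(self, in_channels=1, n_classes=15):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.Conv3d(32, 32, 3, padding=1), nn.ReLU())
        # The 1x1x1 classifier sees both shallower (block1) and deeper (block2) context.
        self.classifier = nn.Conv3d(16 + 32, n_classes, kernel_size=1)

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        return self.classifier(torch.cat([f1, f2], dim=1))

# Dense prediction on a small sub-volume.
model = Tiny3DFCN()
logits = model(torch.randn(1, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 15, 32, 32, 32])
```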

  15. Scaling beta-delayed neutron measurements to large detector areas

    NASA Astrophysics Data System (ADS)

    Sutanto, F.; Nattress, J.; Jovanovic, I.

    2017-08-01

    We explore the performance of a cargo screening system that consists of two large-sized composite scintillation detectors and a high-energy neutron interrogation source by modeling and simulation. The goal of the system is to measure β-delayed neutron emission from an illicit special nuclear material by use of active interrogation. This task is challenging because the β-delayed neutron yield is small in comparison with the yield of the prompt fission secondary products, β-delayed neutrons are emitted with relatively low energies, and high neutron and gamma backgrounds are typically present. Detectors used to measure delayed neutron emission must exhibit high intrinsic efficiency and cover a large solid angle, which also makes them sensitive to background neutron radiation. We present a case study where we attempt to detect the presence of 5 kg-scale quantities of 235U in a standard air-filled cargo container using 14 MeV neutrons as a probe. We find that by using a total measurement time of ~11.6 s and a dose equivalent of ~1.7 mrem, the presence of 235U can be detected with false positive and false negative probabilities that are both no larger than 0.1%.

  16. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. In particular, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.

  17. Estimating planktonic diversity through spatial dominance patterns in a model ocean.

    PubMed

    Soccodato, Alice; d'Ovidio, Francesco; Lévy, Marina; Jahn, Oliver; Follows, Michael J; De Monte, Silvia

    2016-10-01

    In the open ocean, the observation and quantification of biodiversity patterns is challenging. Marine ecosystems are indeed largely composed of microbial planktonic communities whose niches are affected by highly dynamic physico-chemical conditions, and whose observation requires advanced methods for morphological and molecular classification. Optical remote sensing offers an appealing complement to these in-situ techniques. Global-scale coverage at high spatiotemporal resolution is however achieved at the cost of restricted information on the local assemblage. Here, we use a coupled physical and ecological model ocean simulation to explore one possible metric for comparing measures performed on such different scales. We show that a large part of the local diversity of the virtual plankton ecosystem - corresponding to what is accessible by genomic methods - can be inferred from crude, but spatially extended, information - as conveyed by remote sensing. Shannon diversity of the local community is indeed highly correlated to a 'seascape' index, which quantifies the surrounding spatial heterogeneity of the most abundant functional group. The error implied in drastically reducing the resolution of the plankton community is shown to be smaller in frontal regions as well as in regions of intermediate turbulent energy. On spatial scales of hundreds of km, patterns of virtual plankton diversity are thus largely sustained by mixing communities that occupy adjacent niches. We provide a proof of principle that in the open ocean information on the spatial variability of communities can compensate for limited local knowledge, suggesting the possibility of integrating in-situ and satellite observations to monitor biodiversity distribution at the global scale. Copyright © 2016 Elsevier B.V. All rights reserved.
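
    As a rough sketch of the two quantities compared above, the code below computes the Shannon index of a local community and a simple dominance-heterogeneity proxy over a spatial neighbourhood. The window size, the grid, and the proxy itself are illustrative assumptions; the paper's actual 'seascape' index is not reproduced.

```python
import numpy as np

def shannon_diversity(abundances):
    """Shannon index H' = -sum p_i ln p_i over nonzero relative abundances."""
    p = np.asarray(abundances, float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def dominance_heterogeneity(dominant_map, i, j, radius=2):
    """Fraction of cells in a (2*radius+1)^2 window whose dominant functional
    group differs from that of the centre cell -- a crude 'seascape' proxy."""
    window = dominant_map[max(i - radius, 0):i + radius + 1,
                          max(j - radius, 0):j + radius + 1]
    return float(np.mean(window != dominant_map[i, j]))

# Hypothetical 5-type local community and a map of locally dominant types.
print(f"H' = {shannon_diversity([0.45, 0.30, 0.15, 0.07, 0.03]):.2f}")
rng = np.random.default_rng(1)
dom = rng.integers(0, 4, size=(20, 20))
print(f"heterogeneity at (10, 10) = {dominance_heterogeneity(dom, 10, 10):.2f}")
```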

  18. Simultaneous wall-shear-stress and wide-field PIV measurements in a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Gomit, Guillaume; Fourrie, Gregoire; de Kat, Roeland; Ganapathisubramani, Bharathram

    2015-11-01

    Simultaneous particle image velocimetry (PIV) and hot-film shear stress sensor measurements were performed to study the large-scale structures associated with shear stress events in a flat plate turbulent boundary layer at a high Reynolds number (Reτ ~ 4000). The PIV measurement was performed in a streamwise-wall normal plane using an array of six high resolution cameras (4 × 16 MP and 2 × 29 MP). The resulting field of view covers 8 δ (where δ is the boundary layer thickness) in the streamwise direction and captures the entire boundary layer in the wall-normal direction. The spatial resolution of the measurement is approximately 70 wall units (1.8 mm), sampled every 35 wall units (0.9 mm). In association with the PIV setup, a spanwise array of 10 skin-friction sensors (spanning one δ) was used to capture the footprint of the large-scale structures. This combination of measurements allowed the analysis of three-dimensional conditional structures in the boundary layer. In particular, from conditional averages, the 3D organisation of the streamwise and wall-normal velocity components (u and v) and the Reynolds shear stress (-u'v') related to low and high shear stress events can be extracted. European Research Council Grant No-277472-WBT.

  19. Two stage hydrolysis of corn stover at high solids content for mixing power saving and scale-up applications.

    PubMed

    Liu, Ke; Zhang, Jian; Bao, Jie

    2015-11-01

    A two-stage hydrolysis of corn stover was designed to resolve the conflict between the need for sufficient mixing at high solids content and the high power input required in large-scale bioreactors. The process starts with a quick liquefaction step that converts solid cellulose to a liquid slurry under strong mixing in small reactors, followed by a comprehensive hydrolysis step that completes saccharification into fermentable sugars in large reactors without agitation apparatus. 60% of the mixing energy consumption was saved by removing the mixing apparatus in the large-scale vessels. The scale-up ratio was small for the first-step hydrolysis reactors because of the reduced reactor volume. For the large saccharification reactors in the second step, scale-up was easy because no mixing mechanism was involved. This two-stage hydrolysis is applicable to either simple hydrolysis or combined fermentation processes. The method provides a practical process option for industrial scale biorefinery processing of lignocellulosic biomass. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. A new large area scintillator screen for X-ray imaging

    NASA Astrophysics Data System (ADS)

    Nagarkar, V. V.; Miller, S. R.; Tipnis, S. V.; Lempicki, A.; Brecher, C.; Lingertat, H.

    2004-01-01

    We report on the development of a new, large area, powdered scintillator screen based on Lu₂O₃(Eu). As reported earlier, the transparent ceramic form of this material has a very high density of 9.4 g/cm³, a high light output comparable to that of CsI(Tl), and emits in a narrow spectral band centered at about 610 nm. Research into fabrication of this ceramic scintillator in a large area format is currently underway; however, the process is not yet practical for large scale production. Here we have explored fabrication of large area screens using the precursor powders from which the ceramics are fabricated. To date we have produced up to 16 × 16 cm² area screens with thickness in the range of 18 mg/cm². This paper outlines the screen fabrication technique and presents its imaging performance in comparison with a commercial Gd₂O₂S:Tb (GOS) screen.
