Sample records for "enable large scale"

  1. Large Scale PEM Electrolysis to Enable Renewable Hydrogen Fuel Production

    DTIC Science & Technology

    2010-02-10

    [PEM fuel cell diagram: anode, cathode, electron flow through an electric load]...BOP system. Enables new product launch (C-Series); Proton PEM cell stack for UK Vanguard subs. UNCLASSIFIED: Dist A. Approved for public release. "Large Scale PEM Electrolysis to Enable Renewable Hydrogen Fuel Production" Alternative Energy

  2. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  3. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  4. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.

  5. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  6. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
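
    The loop-perforation heuristic named above can be illustrated on PageRank: each sweep skips the update for a random fraction of vertices, trading a small amount of accuracy for less work per iteration. A minimal sketch in Python follows; the graph representation, skip rate, and iteration count are illustrative assumptions, not taken from the paper.

    ```python
    import random

    def pagerank_perforated(adj, d=0.85, iters=50, skip_prob=0.25, seed=0):
        """PageRank with loop perforation: in each sweep, a random fraction
        of vertices keeps its old score instead of being recomputed."""
        rng = random.Random(seed)
        n = len(adj)
        rank = {v: 1.0 / n for v in adj}
        out_deg = {v: len(nbrs) for v, nbrs in adj.items()}
        incoming = {v: [] for v in adj}          # reverse adjacency
        for u, nbrs in adj.items():
            for v in nbrs:
                incoming[v].append(u)
        for _ in range(iters):
            new_rank = {}
            for v in adj:
                if rng.random() < skip_prob:     # perforated: skip this update
                    new_rank[v] = rank[v]
                    continue
                s = sum(rank[u] / out_deg[u] for u in incoming[v])
                new_rank[v] = (1 - d) / n + d * s
            rank = new_rank
        return rank

    # Toy usage on a 4-vertex graph.
    print(pagerank_perforated({0: [1], 1: [2], 2: [0], 3: [2]}))
    ```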

  7. Large Composite Structures Processing Technologies for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.

    2001-01-01

    Significant efforts have been devoted to establishing the technology foundation needed to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second-generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building-block approach are required to enable the envisioned second-generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.

  8. Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.

    PubMed

    Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan

    2017-08-15

    Neurobiology-inspired Spiking Neural Networks (SNNs) enable efficient learning and recognition tasks. To achieve a large-scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we had demonstrated an LIF neuron by a novel 4-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated-INPN) by physics simulation. Excellent improvement in area and power compared to conventional analog circuit implementations was observed. In this paper, we propose and experimentally demonstrate a compact conventional 3-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) to replace the 4-terminal gated-INPN device. The impact ionization (II) induced floating-body effect in the SOI-MOSFET is used to capture LIF neuron behavior and to demonstrate the dependence of spiking frequency on input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10¹¹ neuron) large neural networks.
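
    The leaky integrate-and-fire dynamics the device mimics are easy to state in code: the membrane potential decays toward rest, integrates the input, and resets on a threshold crossing, so spiking frequency rises with input strength. A minimal discrete-time sketch in Python with generic, illustrative parameters (not the measured device values):

    ```python
    def lif_spike_rate(i_in, v_th=1.0, v_rest=0.0, tau=20e-3,
                       r=1.0, dt=1e-4, t_end=1.0):
        """Leaky integrate-and-fire neuron: dv/dt = (-(v - v_rest) + r*i_in)/tau,
        spike and reset when v crosses v_th. Returns spikes per second."""
        v, spikes = v_rest, 0
        for _ in range(int(t_end / dt)):
            v += dt * (-(v - v_rest) + r * i_in) / tau
            if v >= v_th:            # threshold crossing: emit spike, reset
                spikes += 1
                v = v_rest
        return spikes / t_end

    for i_in in (1.1, 1.5, 2.0, 4.0):    # stronger drive -> higher rate
        print(f"I = {i_in}: {lif_spike_rate(i_in):.0f} spikes/s")
    ```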

  9. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data.

    PubMed

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2018-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  10. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity is the computational cost of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
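
    The substitution the authors describe, training a network on input/output pairs from an expensive code and then evaluating the cheap network instead, can be sketched with scikit-learn. The toy target function below merely stands in for a viscoelastic solver; nothing about the paper's actual architecture or training set is reproduced here.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_solver(x):
        """Stand-in for a costly viscoelastic calculation (illustrative only)."""
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(5000, 2))   # sampled model parameters
    y = expensive_solver(X)                  # "training runs" of the slow code

    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X, y)

    X_new = rng.uniform(-1, 1, size=(3, 2))
    print("ANN surrogate:", surrogate.predict(X_new))   # fast evaluation
    print("reference:    ", expensive_solver(X_new))    # slow reference
    ```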

  11. The Use of Weighted Graphs for Large-Scale Genome Analysis

    PubMed Central

    Zhou, Fang; Toivonen, Hannu; King, Ross D.

    2014-01-01

    There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
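
    As one hypothetical reading of the taxonomic weighting, an enzyme co-occurrence graph can be built so that edge weights record how often two enzymes appear in the same genome; the sketch below illustrates the data structure, not the paper's exact definition.

    ```python
    from collections import Counter
    from itertools import combinations

    def cooccurrence_graph(genomes):
        """Weighted enzyme graph: edge weight = fraction of genomes in which
        both enzymes occur (a hypothetical weighting, for illustration)."""
        pair_counts = Counter()
        for enzymes in genomes.values():
            for pair in combinations(sorted(enzymes), 2):
                pair_counts[pair] += 1
        return {pair: n / len(genomes) for pair, n in pair_counts.items()}

    genomes = {
        "genome_A": {"EC1.1.1.1", "EC2.7.1.1", "EC4.1.2.13"},
        "genome_B": {"EC1.1.1.1", "EC2.7.1.1"},
        "genome_C": {"EC2.7.1.1", "EC4.1.2.13"},
    }
    print(cooccurrence_graph(genomes))
    ```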

  12. Miniaturized integration of a fluorescence microscope

    PubMed Central

    Ghosh, Kunal K.; Burns, Laurie D.; Cocker, Eric D.; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J.

    2013-01-01

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals towards relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including semiconductor light source and sensor. This device enables high-speed cellular-level imaging across ∼0.5 mm² areas in active mice. This capability allowed concurrent tracking of Ca²⁺ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca²⁺ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens. PMID:21909102

  13. Miniaturized integration of a fluorescence microscope.

    PubMed

    Ghosh, Kunal K; Burns, Laurie D; Cocker, Eric D; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J

    2011-09-11

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals for relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including a semiconductor light source and sensor. This device enables high-speed cellular imaging across ∼0.5 mm² areas in active mice. This capability allowed concurrent tracking of Ca²⁺ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca²⁺ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens.

  14. Building the Foundations for a Large-Scale, Cross-Sector Collaboration for a Sustainable and Permanent Return to the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Kapoglou, A.

    2017-10-01

    This presentation will describe how to build the foundations needed for a large scale, cross-industry collaboration to enable a sustainable and permanent return to the Moon based on system leadership, cross-sector partnership, and inclusive business.

  15. A large-scale photonic node architecture that utilizes interconnected OXC subsystems.

    PubMed

    Iwai, Yuto; Hasegawa, Hiroshi; Sato, Ken-ichi

    2013-01-14

    We propose a novel photonic node architecture that is composed of interconnected small-scale optical cross-connect subsystems. We also developed an efficient dynamic network control algorithm that complies with a restriction on the number of intra-node fibers used for subsystem interconnection. Numerical evaluations verify that the proposed architecture offers almost the same performance as the equivalent single large-scale cross-connect switch, while enabling substantial hardware scale reductions.

  16. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. This planet-scale data poses serious challenges for storage and computing technologies. Cloud computing is a promising alternative because it jointly addresses storage and high-performance computing for large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical researchers make this vast amount of diverse data meaningful and usable. PMID:24288665

  17. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable visitors to volunteer their computing resources toward running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
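
    The queue-management idea in this abstract, a relational database that hands out small work units to volunteer nodes and collects results, can be sketched with Python's built-in sqlite3; the table layout and function names are illustrative assumptions, not the paper's implementation.

    ```python
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE tasks (
        id INTEGER PRIMARY KEY, params TEXT,
        status TEXT DEFAULT 'pending', result TEXT)""")
    db.executemany("INSERT INTO tasks (params) VALUES (?)",
                   [(f"subcatchment-{i}",) for i in range(6)])

    def checkout_task():
        """Hand the next pending work unit to a volunteer node."""
        row = db.execute("SELECT id, params FROM tasks "
                         "WHERE status='pending' LIMIT 1").fetchone()
        if row:
            db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
        return row

    def submit_result(task_id, result):
        """Record a completed simulation chunk returned by a volunteer."""
        db.execute("UPDATE tasks SET status='done', result=? WHERE id=?",
                   (result, task_id))

    task_id, params = checkout_task()
    submit_result(task_id, "runoff=42.0")
    print(db.execute("SELECT id, status, result FROM tasks").fetchall())
    ```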

  18. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  19. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    PubMed

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  20. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets are overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu.
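
    The postquantification steps the abstract lists (assay normalization, replicate grouping, statistical analysis) map naturally onto a few lines of pandas/SciPy; the data, column names, and normalization rule below are invented for illustration and are not HiQuant's implementation.

    ```python
    import pandas as pd
    from scipy import stats

    # Toy quantification matrix: rows = proteins, columns = MS runs.
    df = pd.DataFrame(
        {"ctrl_1": [5.1, 8.0, 2.2], "ctrl_2": [5.3, 7.8, 2.0],
         "case_1": [9.9, 8.1, 2.1], "case_2": [10.2, 7.7, 2.3]},
        index=["P1", "P2", "P3"])

    # Assay normalization: align per-run medians.
    norm = df - df.median() + df.median().mean()

    # Replicate grouping and a per-protein two-sample t-test.
    ctrl = norm[["ctrl_1", "ctrl_2"]]
    case = norm[["case_1", "case_2"]]
    t, p = stats.ttest_ind(case, ctrl, axis=1)
    print(pd.DataFrame({"t": t, "p": p}, index=norm.index))
    ```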

  21. Large-scale synthesis of arrays of high-aspect-ratio rigid vertically aligned carbon nanofibres

    NASA Astrophysics Data System (ADS)

    Melechko, A. V.; McKnight, T. E.; Hensley, D. K.; Guillorn, M. A.; Borisevich, A. Y.; Merkulov, V. I.; Lowndes, D. H.; Simpson, M. L.

    2003-09-01

    We report on techniques for catalytic synthesis of rigid, high-aspect-ratio, vertically aligned carbon nanofibres by dc plasma enhanced chemical vapour deposition that are tailored for applications that require arrays of individual fibres that feature long fibre lengths (up to 20 µm) such as scanning probe microscopy, penetrant cell and tissue probing arrays and mechanical insertion approaches for gene delivery to cell cultures. We demonstrate that the definition of catalyst nanoparticles is the critical step that enables growth of individual, long-length fibres and discuss methods for catalyst particle preparation that allow the growth of individual isolated nanofibres from catalyst dots with diameters as large as 500 nm. This development enables photolithographic definition of catalyst and therefore the inexpensive, large-scale production of such arrays.

  22. Instantaneous variance scaling of AIRS thermodynamic profiles using a circular area Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.

    2018-05-01

    Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
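
    The power-law exponents quoted here come from fits of variance against length scale; the standard recipe is a least-squares line in log-log space, sketched below on synthetic data (the numbers are invented, not AIRS retrievals).

    ```python
    import numpy as np

    def scaling_exponent(scales_km, variances):
        """Power-law exponent from a least-squares fit in log-log space."""
        slope, _intercept = np.polyfit(np.log(scales_km), np.log(variances), 1)
        return slope

    rng = np.random.default_rng(1)
    L = np.array([55.0, 110.0, 220.0, 440.0, 880.0])        # length scales, km
    var = L ** (2 / 3) * rng.lognormal(0.0, 0.05, L.size)   # synthetic variances
    print(f"fitted exponent: {scaling_exponent(L, var):.2f}")  # ~0.67
    ```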

  23. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating data sets large enough that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.

  24. Large Scale Spectral Line Mapping of Galactic Regions with CCAT-Prime

    NASA Astrophysics Data System (ADS)

    Simon, Robert

    2018-01-01

    CCAT-prime is a 6-m submillimeter telescope that is being built on the top of Cerro Chajnantor (5600 m altitude) overlooking the ALMA plateau in the Atacama Desert. Its novel Crossed-Dragone design enables a large field of view without blockage and is thus particularly well suited for large scale surveys in the continuum and spectral lines targeting important questions ranging from star formation in the Milky Way to cosmology. On this poster, we focus on the large scale mapping opportunities in important spectral cooling lines of the interstellar medium opened up by CCAT-prime and the Cologne heterodyne instrument CHAI.

  25. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  26. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  27. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
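
    Technique (a), greedy merging, can be caricatured in a few lines: sort candidate correspondences by score and keep each one that remains structurally consistent with those already chosen. The one-to-one check below is a toy stand-in for SME's full structural constraints.

    ```python
    def greedy_merge(match_hypotheses):
        """Greedily build one interpretation from scored correspondences,
        keeping a hypothesis only if its items are still unmapped."""
        used_base, used_target, interpretation = set(), set(), []
        for score, base, target in sorted(match_hypotheses, reverse=True):
            if base not in used_base and target not in used_target:
                interpretation.append((base, target, score))
                used_base.add(base)
                used_target.add(target)
        return interpretation

    hypotheses = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
                  (0.4, "sun", "electron"), (0.3, "planet", "nucleus")]
    print(greedy_merge(hypotheses))
    ```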

  28. Large-scale deep learning for robotically gathered imagery for science

    NASA Astrophysics Data System (ADS)

    Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.

    2016-12-01

    With the explosion of computing power, the intelligence and capability of mobile robotics has dramatically increased over the last two decades. Today, we can deploy autonomous robots to achieve observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science combining machine learning and robotics. This work will discuss the application of new low-cost high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.

  29. Designing for Scale: Reflections on Rolling Out Reading Improvement in Kenya and Liberia.

    PubMed

    Gove, Amber; Korda Poole, Medina; Piper, Benjamin

    2017-03-01

    Since 2008, the Ministries of Education in Liberia and Kenya have undertaken transitions from small-scale pilot programs to improve reading outcomes among primary learners to the large-scale implementation of reading interventions. The effects of the pilots on learning outcomes were significant, but questions remained regarding whether such large gains could be sustained at scale. In this article, the authors dissect the Liberian and Kenyan experiences with implementing large-scale reading programs, documenting the critical components and conditions of the program designs that affected the likelihood of successfully transitioning from pilot to scale. They also review the design, deployment, and effectiveness of each pilot program and the scale, design, duration, enabling conditions, and initial effectiveness results of the scaled programs in each country. The implications of these results for the design of both pilot and large-scale reading programs are discussed in light of the experiences of both the Liberian and Kenyan programs. © 2017 Wiley Periodicals, Inc.

  30. Large-Scale, Three-Dimensional, Free-Standing, and Mesoporous Metal Oxide Networks for High-Performance Photocatalysis

    PubMed Central

    Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng

    2013-01-01

    Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component, large-scale, three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess a large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large surface exposure area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and of noble metal nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks enable high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595

  31. Method for large-scale fabrication of atomic-scale structures on material surfaces using surface vacancies

    DOEpatents

    Lim, Chong Wee; Ohmori, Kenji; Petrov, Ivan Georgiev; Greene, Joseph E.

    2004-07-13

    A method for forming atomic-scale structures on a surface of a substrate on a large scale includes creating a predetermined amount of surface vacancies on the surface of the substrate by removing an amount of atoms on the surface of the material corresponding to the predetermined amount of the surface vacancies. Once the surface vacancies have been created, atoms of a desired structure material are deposited on the surface of the substrate to enable the surface vacancies and the atoms of the structure material to interact. The interaction causes the atoms of the structure material to form the atomic-scale structures.

  32. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; hide

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  33. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Astrophysics Data System (ADS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ~70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at ~10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  34. Optical interconnect for large-scale systems

    NASA Astrophysics Data System (ADS)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  35. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, and two applications which illustrate our framework's ability to support general web-based robotic control, and we offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  36. Transmission Infrastructure | Energy Analysis | NREL

    Science.gov Websites

    …aggregating geothermal with other complementary generating technologies in renewable energy zones…infrastructure planning and expansion to enable large-scale deployment of renewable energy in the future…Energy, FERC, NERC, and the regional entities, transmission providers, generating companies, utilities…

  37. The influence of large-scale wind power on global climate.

    PubMed

    Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J

    2004-11-16

    Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO₂ and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.

  38. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
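
    The headline numbers can be sanity-checked with a back-of-envelope model: if gate errors were independent, circuit fidelity would fall as F ≈ (1 − ε)^N, so ε = 10⁻⁵ allows roughly 35,000 entangling gates before F drops to about 70%. This independence assumption is an illustration, not the paper's analysis.

    ```python
    import math

    def circuit_fidelity(n_gates, eps):
        """Fidelity under independent per-gate errors: F = (1 - eps)^N."""
        return (1 - eps) ** n_gates

    eps = 1e-5
    n_70 = math.log(0.7) / math.log(1 - eps)   # gates until F ~ 0.7
    print(f"~{n_70:.0f} gates at eps={eps}")                        # ~35667
    print(f"F after 35000 gates: {circuit_fidelity(35000, eps):.3f}")  # ~0.705
    ```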

  39. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  40. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large-format, high-resolution imagers comes online (ODI, DECAM, LSST, etc.) we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in-between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as to overlay data from publicly available sources (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
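
    The dynamically scaling level-of-detail idea can be sketched as per-zoom-level tiling: when zoomed out, the viewer draws aggregate counts per tile, and at deeper levels, individual sources. The flat-sky grid below is a hypothetical simplification (a real service would use HEALPix or a similar sky tiling).

    ```python
    from collections import defaultdict

    def build_lod_tiles(sources, max_level=3):
        """Bin (ra, dec) sources into one grid per zoom level so a viewer can
        draw per-tile counts when zoomed out and raw sources when zoomed in."""
        tiles = {}
        for level in range(max_level + 1):
            cells = 2 ** level                       # grid cells per axis
            grid = defaultdict(list)
            for ra, dec in sources:
                ix = min(int(ra / 360.0 * cells), cells - 1)
                iy = min(int((dec + 90.0) / 180.0 * cells), cells - 1)
                grid[(ix, iy)].append((ra, dec))
            tiles[level] = grid
        return tiles

    tiles = build_lod_tiles([(10.2, -5.1), (10.3, -5.0), (250.0, 42.0)])
    print({cell: len(pts) for cell, pts in tiles[2].items()})  # counts per tile
    ```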

  41. BioPlex Display: An Interactive Suite for Large-Scale AP-MS Protein-Protein Interaction Data.

    PubMed

    Schweppe, Devin K; Huttlin, Edward L; Harper, J Wade; Gygi, Steven P

    2018-01-05

    The development of large-scale data sets requires new means to display and disseminate research studies to large audiences. Knowledge of protein-protein interaction (PPI) networks has become a principal interest of many groups within the field of proteomics. At the confluence of technologies such as cross-linking mass spectrometry, yeast two-hybrid, protein cofractionation, and affinity purification mass spectrometry (AP-MS), detection of PPIs can uncover novel biological inferences at high throughput. Thus, new platforms to provide community access to large data sets are necessary. To this end, we have developed a web application that enables exploration and dissemination of the growing BioPlex interaction network. BioPlex is a large-scale interactome data set based on AP-MS of baits from the human ORFeome. The latest BioPlex data set release (BioPlex 2.0) contains 56,553 interactions from 5,891 AP-MS experiments. To improve community access to this vast compendium of interactions, we developed BioPlex Display, which integrates individual protein querying, access to empirical data, and on-the-fly annotation of networks within an easy-to-use and mobile web application. BioPlex Display enables rapid acquisition of data from BioPlex and development of hypotheses based on protein interactions.

  42. Technical Assessment: Integrated Photonics

    DTIC Science & Technology

    2015-10-01

    in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this...communication networks. This has enabled services like the internet, high-performance computing, and power-efficient large-scale data centers. The...signal processing, quantum information science, and optics for free-space applications. However, major obstacles challenge the implementation of

  43. Simulation of nitrate reduction in groundwater - An upscaling approach from small catchments to the Baltic Sea basin

    NASA Astrophysics Data System (ADS)

    Hansen, A. L.; Donnelly, C.; Refsgaard, J. C.; Karlsson, I. B.

    2018-01-01

    This paper describes a modeling approach proposed to simulate the impact of local-scale, spatially targeted N-mitigation measures for the Baltic Sea Basin. Spatially targeted N-regulations aim at exploiting the considerable spatial differences in the natural N-reduction taking place in groundwater and surface water. While such measures can be simulated using local-scale physically-based catchment models, use of such detailed models for the 1.8 million km² Baltic Sea basin is not feasible due to constraints on input data and computing power. Large-scale models that are able to simulate the Baltic Sea basin, on the other hand, do not have adequate spatial resolution to simulate some of the field-scale measures. Our methodology combines knowledge and results from two local-scale physically-based MIKE SHE catchment models, the large-scale and more conceptual E-HYPE model, and auxiliary data in order to enable E-HYPE to simulate how spatially targeted regulation of agricultural practices may affect N-loads to the Baltic Sea. We conclude that the use of E-HYPE with this upscaling methodology enables simulation of the impact of a spatially targeted regulation on N-loads at the Baltic Sea basin scale to the correct order of magnitude. The E-HYPE model together with the upscaling methodology therefore provides a sound basis for large-scale policy analysis; however, we do not expect it to be sufficiently accurate to be useful for the detailed design of local-scale measures.

  44. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  45. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background: The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results: In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions: BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  46. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew

    The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  47. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 × 10⁶ based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
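
    The quoted criterion, a jet Reynolds number of about 5 × 10⁶ based on exhaust diameter, is straightforward to evaluate for a candidate model jet with Re = ρVD/μ; the flow values below are illustrative sea-level numbers, not data from the study.

    ```python
    def jet_reynolds(rho, velocity, diameter, mu):
        """Jet Reynolds number based on exhaust diameter: Re = rho*V*D/mu."""
        return rho * velocity * diameter / mu

    rho = 1.225      # air density, kg/m^3
    mu = 1.81e-5     # dynamic viscosity, kg/(m*s)
    V = 300.0        # exhaust velocity, m/s
    D = 0.25         # nozzle exit diameter, m
    print(f"Re = {jet_reynolds(rho, V, D, mu):.2e}")   # ~5.1e6, near the criterion
    ```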

  8. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  9. Molecular inversion probe assay.

    PubMed

    Absalan, Farnaz; Ronaghi, Mostafa

    2007-01-01

    We have described molecular inversion probe technologies for large-scale genetic analyses. This technique provides a comprehensive and powerful tool for the analysis of genetic variation and enables affordable, large-scale studies that will help uncover the genetic basis of complex disease and explain the individual variation in response to therapeutics. Major applications of the molecular inversion probes (MIP) technologies include targeted genotyping from focused regions to whole-genome studies, and allele quantification of genomic rearrangements. The MIP technology (used in the HapMap project) provides an efficient, scalable, and affordable way to score polymorphisms in case/control populations for genetic studies. The MIP technology provides the highest commercially available multiplexing levels and assay conversion rates for targeted genotyping. This enables more informative, genome-wide studies with either the functional (direct detection) approach or the indirect detection approach.

  10. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.
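
    The deployment pattern described here can be scripted end to end. Below is a minimal sketch, not the actual LIS/SUMMA tooling: the image tag and job manifest are hypothetical names, and working Docker and Kubernetes installations are assumed.

      # Sketch: build a container bundling a model plus its dependencies, then
      # hand a job manifest to Kubernetes to schedule it on a cluster.
      # "lis-summa:latest" and "lis-job.yaml" are illustrative placeholders.
      import subprocess

      def build_and_launch(image="lis-summa:latest", manifest="lis-job.yaml"):
          # Build the image so OS, libraries, and compilers travel with the model
          subprocess.run(["docker", "build", "-t", image, "."], check=True)
          # Ask Kubernetes to deploy the containers described in the manifest
          subprocess.run(["kubectl", "apply", "-f", manifest], check=True)

      if __name__ == "__main__":
          build_and_launch()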

  11. Detection of Chorus Elements and other Wave Signatures Using Geometric Computational Techniques in the Van Allen radiation belts

    NASA Astrophysics Data System (ADS)

    Sengupta, A.; Kletzing, C.; Howk, R.; Kurth, W. S.

    2017-12-01

    An important goal of the Van Allen Probes mission is to understand wave-particle interactions that can energize relativistic electrons in the Earth's Van Allen radiation belts. The EMFISIS instrumentation suite provides measurements of the wave electric and magnetic fields of features such as chorus that participate in these interactions. Geometric signal processing discovers structural relationships, e.g. connectivity across ridge-like features in chorus elements, to reveal properties such as the dominant angle of an element (frequency sweep rate) and the integrated power along a given chorus element. These techniques disambiguate such wave features from background hiss-like chorus. This enables autonomous discovery of chorus elements across the large volumes of EMFISIS data. At the scale of individual or overlapping chorus elements, topological pattern recognition techniques enable interpretation of chorus microstructure by discovering connectivity and other geometric features within the wave signature of a single chorus element or between overlapping chorus elements. Thus chorus wave features can be quantified and studied at multiple scales of spectral geometry using geometric signal processing techniques. We present recently developed computational techniques that exploit the spectral geometry of chorus elements and whistlers to enable large-scale automated discovery, detection and statistical analysis of these events over EMFISIS data. Specifically, we present case studies across a diverse portfolio of chorus elements and discuss the performance of our algorithms regarding precision of detection as well as interpretation of chorus microstructure. We also provide large-scale statistical analysis of the distribution of dominant sweep rates and other properties of the detected chorus elements.
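
    As an illustration of this kind of geometric signal processing, the sketch below thresholds a spectrogram, finds connected ridge-like regions, and reports each region's dominant angle (a sweep-rate proxy) and integrated power. It is a simplified stand-in for the EMFISIS pipeline, with illustrative parameters.

      # Minimal sketch: ridge-style detection of chorus-like spectrogram elements
      import numpy as np
      from scipy.signal import spectrogram
      from scipy.ndimage import label

      def detect_elements(wave, fs, thresh_db=10.0, min_pixels=10):
          f, t, S = spectrogram(wave, fs=fs, nperseg=1024)
          P = 10.0 * np.log10(S + 1e-20)            # power in dB
          mask = P > np.median(P) + thresh_db       # suppress hiss-like background
          comps, n = label(mask)                    # connected ridge-like regions
          elements = []
          for k in range(1, n + 1):
              fi, ti = np.nonzero(comps == k)
              if fi.size < min_pixels:              # ignore speckle
                  continue
              pts = np.column_stack([t[ti], f[fi]])
              pts -= pts.mean(axis=0)
              # dominant angle from the principal axis of the pixel cloud,
              # a proxy for the frequency sweep rate df/dt
              _, vecs = np.linalg.eigh(np.cov(pts.T))
              sweep = vecs[1, -1] / vecs[0, -1]     # large for near-vertical ridges
              power = S[fi, ti].sum()               # integrated element power
              elements.append((sweep, power))
          return elements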

  12. Connecting the large- and the small-scale magnetic fields of solar-like stars

    NASA Astrophysics Data System (ADS)

    Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.

    2018-05-01

    A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters flux emergence rate, differential rotation and meridional flow affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
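
    The filtering step that connects simulated surface fields to the observable large-scale field can be sketched by direct quadrature: project the map onto spherical harmonics and keep only the low-degree modes. The grid convention and cutoff degree below are illustrative assumptions, not those of the cited simulations.

      # Minimal sketch: low-pass a surface field by truncating its
      # spherical-harmonic expansion at a small maximum degree.
      import numpy as np
      from scipy.special import sph_harm

      def lowpass_field(Br, lkeep=5):
          nth, nph = Br.shape                              # colatitude x azimuth grid
          th = (np.arange(nth) + 0.5) * np.pi / nth        # colatitude
          ph = (np.arange(nph) + 0.5) * 2 * np.pi / nph    # azimuth
          PH, TH = np.meshgrid(ph, th)
          dA = np.sin(TH) * (np.pi / nth) * (2 * np.pi / nph)
          out = np.zeros_like(Br, dtype=float)
          for l in range(lkeep + 1):                       # retain large scales only
              for m in range(-l, l + 1):
                  Y = sph_harm(m, l, PH, TH)               # scipy: (m, l, azimuth, colatitude)
                  alm = np.sum(Br * np.conj(Y) * dA)       # projection coefficient
                  out += np.real(alm * Y)
          return out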

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallarno, George; Rogers, James H; Maxwell, Don E

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  14. Precision medicine in the age of big data: The present and future role of large-scale unbiased sequencing in drug discovery and development.

    PubMed

    Vicini, P; Fields, O; Lai, E; Litwack, E D; Martin, A-M; Morgan, T M; Pacanowski, M A; Papaluca, M; Perez, O D; Ringel, M S; Robson, M; Sakul, H; Vockley, J; Zaks, T; Dolsten, M; Søgaard, M

    2016-02-01

    High throughput molecular and functional profiling of patients is a key driver of precision medicine. DNA and RNA characterization has been enabled at unprecedented cost and scale through rapid, disruptive progress in sequencing technology, but challenges persist in data management and interpretation. We analyze the state of the art of large-scale unbiased sequencing (LUS) in drug discovery and development, including technology, application, ethical, regulatory, policy and commercial considerations, and discuss issues of LUS implementation in clinical and regulatory practice. © 2015 American Society for Clinical Pharmacology and Therapeutics.

  15. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data, delivering compressed tiled images that enable users to browse through very large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of fast access to large image archives without the requirement of whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
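
    The core server-side operation, resampling an arbitrary 2D section through a 3D volume, can be sketched with scipy as below; this illustrates the idea only and is not the IIP3D server implementation.

      # Minimal sketch: extract an oblique 2D section from a 3D volume by
      # trilinear interpolation along a user-specified plane.
      import numpy as np
      from scipy.ndimage import map_coordinates

      def oblique_section(vol, origin, u, v, shape=(512, 512)):
          """origin: 3-vector; u, v: orthonormal in-plane step vectors (voxels)."""
          rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
          coords = (np.asarray(origin, float)[:, None, None]
                    + np.asarray(u, float)[:, None, None] * rows
                    + np.asarray(v, float)[:, None, None] * cols)
          # order=1 gives trilinear interpolation at the requested coordinates
          return map_coordinates(vol, coords, order=1, mode="nearest")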

  16. Geospatial intelligence and visual classification of environmentally observed species in the Future Internet

    NASA Astrophysics Data System (ADS)

    Arbab-Zavar, B.; Chakravarthy, A.; Sabeur, Z. A.

    2012-04-01

    The rapid development of advanced smart communication tools with good-quality, high-resolution video cameras, audio, and GPS devices in the last few years will lead to profound impacts on the way future environmental observations are conducted and accessed by communities. The resulting large-scale interconnections of these "Future Internet Things" form a large environmental sensing network which will generate large volumes of quality environmental observations at highly localised spatial scales. This enabling of environmental sensing at local scales will contribute greatly to the study of fauna and flora in the near future, particularly regarding the effect of climate change on biodiversity in various regions of Europe and beyond. The Future Internet could also potentially become the de facto information space for participative real-time sensing by communities, improving our situational awareness of the effect of climate on local environments. In the ENVIROFI (2011-2013) Usage Area project in the FP7 FI-PPP programme, a set of requirements for specific (and generic) enablers has been derived, with a view to the potential establishment of participating community observatories of the future. In particular, the specific enablement of interest concerns the building of future interoperable services for the intelligent management of environmental data with tagged contextual geospatial information generated by multiple operators in communities (using smartphones). The classification of observed species in the resulting images is achieved with structured data pre-processing, semantic enrichment using contextual geospatial information, and high-level fusion with controlled uncertainty estimates. The returned species identifications are further improved using ground-truth corrections and learning by the specific enablers.

  17. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  18. Large-Scale Brain Systems in ADHD: Beyond the Prefrontal-Striatal Model

    PubMed Central

    Castellanos, F. Xavier; Proal, Erika

    2012-01-01

    Attention-deficit/hyperactivity disorder (ADHD) has long been thought to reflect dysfunction of prefrontal-striatal circuitry, with involvement of other circuits largely ignored. Recent advances in systems neuroscience-based approaches to brain dysfunction enable the development of models of ADHD pathophysiology that encompass a number of different large-scale “resting state” networks. Here we review progress in delineating large-scale neural systems and illustrate their relevance to ADHD. We relate frontoparietal, dorsal attentional, motor, visual, and default networks to the ADHD functional and structural literature. Insights emerging from mapping intrinsic brain connectivity networks provide a potentially mechanistic framework for understanding aspects of ADHD, such as neuropsychological and behavioral inconsistency, and the possible role of primary visual cortex in attentional dysfunction in the disorder. PMID:22169776

  19. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
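
    At its core, a seed-based analysis reduces to correlating a seed region's mean time series against every voxel. The sketch below is a minimal numpy illustration of that definition; the array shapes are assumptions, and this is not the ACA interface.

      # Minimal sketch: Pearson correlation of a seed time series with all voxels
      import numpy as np

      def seed_connectivity(data, seed_mask):
          """data: (voxels, timepoints); seed_mask: boolean (voxels,)."""
          seed_ts = data[seed_mask].mean(axis=0)
          z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
          s = (seed_ts - seed_ts.mean()) / seed_ts.std()
          return (z @ s) / data.shape[1]        # correlation map, one r per voxel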

  20. Commentary: Environmental nanophotonics and energy

    NASA Astrophysics Data System (ADS)

    Smith, Geoff B.

    2011-01-01

    The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.

  1. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach

    PubMed Central

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048

  2. The Soldier Fitness Tracker: Global Delivery of Comprehensive Soldier Fitness

    ERIC Educational Resources Information Center

    Fravell, Mike; Nasser, Katherine; Cornum, Rhonda

    2011-01-01

    Carefully implemented technology strategies are vital to the success of large-scale initiatives such as the U.S. Army's Comprehensive Soldier Fitness (CSF) program. Achieving the U.S. Army's vision for CSF required a robust information technology platform that was scaled to millions of users and that leveraged the Internet to enable global reach.…

  3. Multi-source Geospatial Data Analysis with Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Erickson, T.

    2014-12-01

    The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
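
    A brief sketch of multi-source analysis with the Earth Engine Python API follows; the dataset identifiers are examples from the public catalog, and an authenticated Earth Engine account is assumed.

      # Minimal sketch: combine a satellite image collection with an elevation
      # model without any manual reprojection or format conversion.
      import ee
      ee.Initialize()

      # Median Landsat 8 surface-reflectance composite for one summer
      composite = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                   .filterDate('2014-06-01', '2014-09-01')
                   .median())

      # Mask the composite to low elevations using the SRTM elevation model
      dem = ee.Image('USGS/SRTMGL1_003')
      lowlands = composite.updateMask(dem.lt(500))
      print(lowlands.bandNames().getInfo())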

  4. Optical power transfer and communication methods for wireless implantable sensing platforms.

    PubMed

    Mujeeb-U-Rahman, Muhammad; Adalian, Dvin; Chang, Chieh-Feng; Scherer, Axel

    2015-09-01

    Ultrasmall-scale implants have recently attracted attention as valuable tools for monitoring both acute and chronic diseases. Semiconductor optical technologies are the key to miniaturizing these devices to the long-sought sub-mm scale, which will enable long-term use of these devices for medical applications. This can also enable the use of multiple implantable devices concurrently to form a true body area network of sensors. We demonstrate optical power transfer techniques and methods to effectively harness this power for implantable devices. Furthermore, we present methods for optical data transfer from such implants. Simultaneous use of these technologies can result in miniaturized sensing platforms that can allow for large-scale use of such systems in real-world applications.

  5. Optical power transfer and communication methods for wireless implantable sensing platforms

    NASA Astrophysics Data System (ADS)

    Mujeeb-U-Rahman, Muhammad; Adalian, Dvin; Chang, Chieh-Feng; Scherer, Axel

    2015-09-01

    Ultrasmall-scale implants have recently attracted attention as valuable tools for monitoring both acute and chronic diseases. Semiconductor optical technologies are the key to miniaturizing these devices to the long-sought sub-mm scale, which will enable long-term use of these devices for medical applications. This can also enable the use of multiple implantable devices concurrently to form a true body area network of sensors. We demonstrate optical power transfer techniques and methods to effectively harness this power for implantable devices. Furthermore, we present methods for optical data transfer from such implants. Simultaneous use of these technologies can result in miniaturized sensing platforms that can allow for large-scale use of such systems in real-world applications.

  6. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics

    PubMed Central

    Deutsch, Eric W.; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L.

    2015-01-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include mass spectrometry to define protein sequence, protein:protein interactions, and protein post-translational modifications. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative mass spectrometry proteomics. It supports all major operating systems and instrument vendors via open data formats. Here we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of tandem mass spectrometry datasets, as well as some major upcoming features. PMID:25631240

  7. Trans-Proteomic Pipeline, a standardized data processing pipeline for large-scale reproducible proteomics informatics.

    PubMed

    Deutsch, Eric W; Mendoza, Luis; Shteynberg, David; Slagel, Joseph; Sun, Zhi; Moritz, Robert L

    2015-08-01

    Democratization of genomics technologies has enabled the rapid determination of genotypes. More recently the democratization of comprehensive proteomics technologies is enabling the determination of the cellular phenotype and the molecular events that define its dynamic state. Core proteomic technologies include MS to define protein sequence, protein:protein interactions, and protein PTMs. Key enabling technologies for proteomics are bioinformatic pipelines to identify, quantitate, and summarize these events. The Trans-Proteomics Pipeline (TPP) is a robust open-source standardized data processing pipeline for large-scale reproducible quantitative MS proteomics. It supports all major operating systems and instrument vendors via open data formats. Here, we provide a review of the overall proteomics workflow supported by the TPP, its major tools, and how it can be used in its various modes from desktop to cloud computing. We describe new features for the TPP, including data visualization functionality. We conclude by describing some common perils that affect the analysis of MS/MS datasets, as well as some major upcoming features. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Enablers and Barriers to Large-Scale Uptake of Improved Solid Fuel Stoves: A Systematic Review

    PubMed Central

    Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G.

    2013-01-01

    Background: Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. Objectives: We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. Methods: We conducted systematic searches through multidisciplinary databases, specialist websites, and consulting experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as “factors” relating to one of seven domains—fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; programmatic and policy mechanisms—and also recorded issues that impacted equity. Results: We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantee success: All factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Conclusions: Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness. Citation: Rehfuess EA, Puzzolo E, Stanistreet D, Pope D, Bruce NG. 2014. Enablers and barriers to large-scale uptake of improved solid fuel stoves: a systematic review. Environ Health Perspect 122:120–130; http://dx.doi.org/10.1289/ehp.1306639 PMID:24300100

  9. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of a text processing engine used in visual analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.

  10. Single-chip microprocessor that communicates directly using light

    NASA Astrophysics Data System (ADS)

    Sun, Chen; Wade, Mark T.; Lee, Yunsup; Orcutt, Jason S.; Alloatti, Luca; Georgas, Michael S.; Waterman, Andrew S.; Shainline, Jeffrey M.; Avizienis, Rimas R.; Lin, Sen; Moss, Benjamin R.; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H.; Cook, Henry M.; Ou, Albert J.; Leu, Jonathan C.; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J.; Popović, Miloš A.; Stojanović, Vladimir M.

    2015-12-01

    Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems—from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a ‘zero-change’ approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

  11. Single-chip microprocessor that communicates directly using light.

    PubMed

    Sun, Chen; Wade, Mark T; Lee, Yunsup; Orcutt, Jason S; Alloatti, Luca; Georgas, Michael S; Waterman, Andrew S; Shainline, Jeffrey M; Avizienis, Rimas R; Lin, Sen; Moss, Benjamin R; Kumar, Rajesh; Pavanello, Fabio; Atabaki, Amir H; Cook, Henry M; Ou, Albert J; Leu, Jonathan C; Chen, Yu-Hsin; Asanović, Krste; Ram, Rajeev J; Popović, Miloš A; Stojanović, Vladimir M

    2015-12-24

    Data transport across short electrical wires is limited by both bandwidth and power density, which creates a performance bottleneck for semiconductor microchips in modern computer systems--from mobile phones to large-scale data centres. These limitations can be overcome by using optical communications based on chip-scale electronic-photonic systems enabled by silicon-based nanophotonic devices. However, combining electronics and photonics on the same chip has proved challenging, owing to microchip manufacturing conflicts between electronics and photonics. Consequently, current electronic-photonic chips are limited to niche manufacturing processes and include only a few optical devices alongside simple circuits. Here we report an electronic-photonic system on a single chip integrating over 70 million transistors and 850 photonic components that work together to provide logic, memory, and interconnect functions. This system is a realization of a microprocessor that uses on-chip photonic devices to directly communicate with other chips using light. To integrate electronics and photonics at the scale of a microprocessor chip, we adopt a 'zero-change' approach to the integration of photonics. Instead of developing a custom process to enable the fabrication of photonics, which would complicate or eliminate the possibility of integration with state-of-the-art transistors at large scale and at high yield, we design optical devices using a standard microelectronics foundry process that is used for modern microprocessors. This demonstration could represent the beginning of an era of chip-scale electronic-photonic systems with the potential to transform computing system architectures, enabling more powerful computers, from network infrastructure to data centres and supercomputers.

  12. Linking crop yield anomalies to large-scale atmospheric circulation in Europe.

    PubMed

    Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J

    2017-06-15

    Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
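
    Under simplifying assumptions, the statistical models described can be sketched as an ordinary least-squares regression of yield anomalies on seasonal indices of the four circulation patterns, with the explained variance playing the role of the 20-70% figures quoted above.

      # Minimal sketch: regress yield anomalies on NAO/EA/SCA/EAWR indices
      import numpy as np

      def fit_yield_model(indices, yields):
          """indices: (years, 4) circulation indices; yields: (years,) anomalies."""
          X = np.column_stack([np.ones(len(yields)), indices])
          beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
          pred = X @ beta
          r2 = 1.0 - np.sum((yields - pred) ** 2) / np.sum((yields - yields.mean()) ** 2)
          return beta, r2    # r2: fraction of inter-annual variability explained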

  13. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
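
    For illustration, a minimal serial "top down" refinement toward a point cloud is sketched below; the cited work's contribution is the parallel, hybrid construction of such trees, which this sketch does not reproduce.

      # Minimal sketch: subdivide octree cells that contain geometry points,
      # leaving empty regions coarse (serial, illustrative only).
      import numpy as np

      def refine(center, half, points, max_depth, leaves, depth=0):
          inside = points[np.all(np.abs(points - center) <= half, axis=1)]
          if depth == max_depth or len(inside) == 0:
              leaves.append((center, half))        # keep this cell as a leaf
              return
          for dx in (-0.5, 0.5):
              for dy in (-0.5, 0.5):
                  for dz in (-0.5, 0.5):
                      child = center + half * np.array([dx, dy, dz])
                      refine(child, half / 2, inside, max_depth, leaves, depth + 1)

      leaves = []
      pts = np.random.rand(1000, 3)                # stand-in for surface geometry
      refine(np.array([0.5, 0.5, 0.5]), 0.5, pts, max_depth=4, leaves=leaves)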

  14. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively orchestrate the solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
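
    The decomposition pattern described, sub-images spread across computers with boundary information exchanged over the network, can be sketched with mpi4py as below; the slab layout and one-row ghost layer are illustrative assumptions, not the published implementation.

      # Minimal sketch: slab decomposition with ghost-row exchange (run under mpirun)
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      slab = np.random.rand(64, 512)               # this rank's sub-image
      up = rank - 1 if rank > 0 else MPI.PROC_NULL
      down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      ghost_top = np.empty(512)
      ghost_bot = np.empty(512)
      # send boundary rows to neighbors, receive their boundaries as ghost rows
      comm.Sendrecv(slab[0], dest=up, recvbuf=ghost_bot, source=down)
      comm.Sendrecv(slab[-1], dest=down, recvbuf=ghost_top, source=up)
      # each rank can now segment its slab consistently with its neighbors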

  15. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively orchestrate the solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  16. Large space telescope engineering scale model optical design

    NASA Technical Reports Server (NTRS)

    Facey, T. A.

    1973-01-01

    The objective is to develop the detailed design and tolerance data for the LST engineering scale model optical system. This will enable MSFC to move forward to the optical element procurement phase and also to evaluate tolerances, manufacturing requirements, assembly/checkout procedures, reliability, operational complexity, stability requirements of the structure and thermal system, and the flexibility to change and grow.

  17. Eavesdropping on the Arctic: Automated bioacoustics reveal dynamics in songbird breeding phenology.

    PubMed

    Oliver, Ruth Y; Ellis, Daniel P W; Chmura, Helen E; Krause, Jesse S; Pérez, Jonathan H; Sweet, Shannan K; Gough, Laura; Wingfield, John C; Boelman, Natalie T

    2018-06-01

    Bioacoustic networks could vastly expand the coverage of wildlife monitoring to complement satellite observations of climate and vegetation. This approach would enable global-scale understanding of how climate change influences phenomena such as migratory timing of avian species. The enormous data sets that autonomous recorders typically generate demand automated analyses that remain largely undeveloped. We devised automated signal processing and machine learning approaches to estimate dates on which songbird communities arrived at arctic breeding grounds. Acoustically estimated dates agreed well with those determined via traditional surveys and were strongly related to the landscape's snow-free dates. We found that environmental conditions heavily influenced daily variation in songbird vocal activity, especially before egg laying. Our novel approaches demonstrate that variation in avian migratory arrival can be detected autonomously. Large-scale deployment of this innovation in wildlife monitoring would enable the coverage necessary to assess and forecast changes in bird migration in the face of climate change.
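
    As a rough illustration of the signal-processing starting point, the sketch below computes a vocal-activity index from band-limited spectrogram energy; the frequency band and threshold are assumptions, and the published detectors combine such features with machine learning.

      # Minimal sketch: fraction of frames with above-threshold songbird-band energy
      import numpy as np
      from scipy.signal import spectrogram

      def song_activity(wave, fs, fmin=2000.0, fmax=8000.0):
          f, t, S = spectrogram(wave, fs=fs, nperseg=2048)
          band = S[(f >= fmin) & (f <= fmax)].sum(axis=0)   # in-band energy per frame
          thresh = 3.0 * np.median(band)                    # adaptive threshold
          return float((band > thresh).mean())              # activity index in [0, 1]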

  18. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    PubMed

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
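
    In the spirit of this approach, runtime decision-point injection can be sketched in a few lines: workflow steps consult a registry of policies at call time, so stakeholder policies can be added and composed without editing the workflow itself. The registry and decorator below are an illustrative sketch, not the PDD implementation.

      # Minimal sketch: inject policy decision points into workflows at runtime
      _policies = {}

      def register_policy(point, fn):
          _policies.setdefault(point, []).append(fn)

      def decision_point(name):
          def wrap(step):
              def run(*args, **kwargs):
                  for policy in _policies.get(name, []):
                      if not policy(*args, **kwargs):       # any policy may veto
                          raise PermissionError(f"denied at {name}")
                  return step(*args, **kwargs)
              return run
          return wrap

      @decision_point("submit_job")
      def submit_job(user, job):
          return f"{user} submitted {job}"

      # A stakeholder policy composed after deployment, with no workflow edits
      register_policy("submit_job", lambda user, job: user != "guest")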

  19. DEM GPU studies of industrial scale particle simulations for granular flow civil engineering applications

    NASA Astrophysics Data System (ADS)

    Pizette, Patrick; Govender, Nicolin; Wilke, Daniel N.; Abriak, Nor-Edine

    2017-06-01

    The use of the Discrete Element Method (DEM) for industrial civil engineering applications is currently limited due to the computational demands when large numbers of particles are considered. The graphics processing unit (GPU) with its highly parallelized hardware architecture shows potential to enable solution of civil engineering problems using discrete granular approaches. We demonstrate in this study the practical utility of a validated GPU-enabled DEM modeling environment to simulate industrial scale granular problems. As an illustration, the flow discharge of storage silos using 8 and 17 million particles is considered. DEM simulations have been performed to investigate the influence of particle size (equivalent size for the 20/40-mesh gravel) and induced shear stress for two hopper shapes. The preliminary results indicate that the shape of the hopper significantly influences the discharge rates for the same material. Specifically, this work shows that GPU-enabled DEM modeling environments can model industrial scale problems on a single portable computer within a day for 30 seconds of process time.
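
    The per-contact kernel that such GPU codes parallelize over millions of particle pairs can be sketched as a linear spring-dashpot normal force between two spheres; the stiffness and damping values below are illustrative, not calibrated to the 20/40-mesh gravel.

      # Minimal sketch: linear spring-dashpot normal contact force (DEM core)
      import numpy as np

      def normal_force(x_i, x_j, v_i, v_j, r_i, r_j, k=1e5, c=50.0):
          d = x_j - x_i
          dist = np.linalg.norm(d)
          overlap = (r_i + r_j) - dist
          if overlap <= 0.0:
              return np.zeros(3)                    # spheres not in contact
          n = d / dist                              # contact normal, i toward j
          rel_vn = np.dot(v_j - v_i, n)             # normal relative velocity
          return -(k * overlap - c * rel_vn) * n    # repulsion plus damping, on i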

  20. A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2018-04-01

    We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N^2, with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N^2 scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
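
    The idea behind the N^2 scaling can be summarized in the isotropic case (a sketch with simplified notation): neighbors of each galaxy are binned radially and expanded in spherical harmonics, and multipole moments of the 3PCF follow from products of the coefficients,

      a_{\ell m}(r) = \sum_{j \in \mathrm{bin}\, r} Y^{*}_{\ell m}(\hat{r}_{j}),
      \qquad
      \hat{\zeta}_{\ell}(r_1, r_2) \propto \sum_{m=-\ell}^{\ell} a_{\ell m}(r_1)\, a^{*}_{\ell m}(r_2),

    so the relative angle between pair members never appears explicitly and the cost scales as N^2 rather than N^3. The anisotropic case described above generalizes the basis rather than this overall strategy.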

  1. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model

    PubMed Central

    van Albada, Sacha J.; Rowley, Andrew G.; Senk, Johanna; Hopkins, Michael; Schmidt, Maximilian; Stokes, Alan B.; Lester, David R.; Diesmann, Markus; Furber, Steve B.

    2018-01-01

    The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption. The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks. PMID:29875620
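
    For readers unfamiliar with the NEST side of the comparison, a minimal NEST script of the same general kind (not the cortical microcircuit model itself; all sizes and parameters are illustrative) looks like this:

      # Minimal sketch: a small randomly connected network in NEST
      import nest

      nest.ResetKernel()
      nest.SetKernelStatus({"resolution": 0.1})     # 0.1 ms steps, as in the comparison
      neurons = nest.Create("iaf_psc_exp", 1000)    # leaky integrate-and-fire cells
      noise = nest.Create("poisson_generator", params={"rate": 8000.0})
      nest.Connect(noise, neurons, syn_spec={"weight": 20.0})
      nest.Connect(neurons, neurons,
                   {"rule": "fixed_indegree", "indegree": 100},
                   {"weight": 1.0, "delay": 1.5})
      nest.Simulate(1000.0)                         # simulate one second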

  2. Microwave sensing technology issues related to a global change technology architecture trade study

    NASA Technical Reports Server (NTRS)

    Campbell, Thomas G.; Shiue, Jim; Connolly, Denis; Woo, Ken

    1991-01-01

    The objectives are to enable the development of lighter, less power-consuming, high-resolution microwave sensors which will operate at frequencies from 1 to 200 GHz. These systems will use large aperture antenna systems (both reflector and phased arrays) capable of wide scan angle, high polarization purity, and utilize sidelobe suppression techniques as required. Essentially, the success of this technology program will enable high resolution microwave radiometers from geostationary orbit, lightweight and more efficient radar systems from low Earth orbit, and eliminate mechanical scanning methods to the fullest extent possible, as mechanical scanning is a main source of platform instability in large space systems. The Global Change Technology Initiative (GCTI) will develop technology which will enable the use of satellite systems for Earth observations on a global scale.

  3. Integration and segregation of large-scale brain networks during short-term task automatization

    PubMed Central

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-01-01

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095

  4. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the ultrahigh-volume-fraction regime is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtime can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run time from numerical tests.
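
    The stencil update at the heart of such phase-field codes can be sketched serially as an explicit Cahn-Hilliard step (assuming this standard model for conserved phase coarsening; all parameters are illustrative), which is the kernel a parallel implementation distributes over a 512^3 grid:

      # Minimal 2D sketch: one explicit Cahn-Hilliard coarsening step
      import numpy as np

      def laplacian(f, dx):
          return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

      def step(c, dt=1e-4, dx=1.0, W=1.0, kappa=1.0, M=1.0):
          mu = W * (c**3 - c) - kappa * laplacian(c, dx)   # chemical potential
          return c + dt * M * laplacian(mu, dx)            # conserved dynamics

      c = 0.3 + 0.01 * np.random.randn(256, 256)           # off-critical mixture
      for _ in range(100):
          c = step(c)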

  5. Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline*

    PubMed Central

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W.; Moritz, Robert L.

    2015-01-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at a very low cost. PMID:25418363

  6. Processing shotgun proteomics data on the Amazon cloud with the trans-proteomic pipeline.

    PubMed

    Slagel, Joseph; Mendoza, Luis; Shteynberg, David; Deutsch, Eric W; Moritz, Robert L

    2015-02-01

    Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present on-line tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h and at very low cost. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
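
    The resource- and cost-estimation tools mentioned in the abstract are not reproduced here; the sketch below only illustrates the kind of back-of-envelope arithmetic involved, with invented per-file times and instance prices rather than actual AWS rates or Trans-Proteomic Pipeline benchmarks:

      # All numbers are placeholders, not actual AWS prices or TPP benchmarks.
      def estimate_job(n_files, minutes_per_file, n_instances, usd_per_instance_hour):
          instance_hours = n_files * minutes_per_file / 60.0
          wall_hours = instance_hours / n_instances
          return wall_hours, instance_hours * usd_per_instance_hour

      wall, cost = estimate_job(n_files=1100, minutes_per_file=2.0,
                                n_instances=40, usd_per_instance_hour=0.10)
      print(f"~{wall:.1f} h wall time, ~${cost:.2f} total")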

  7. Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah

    Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a scaling study that compares instrumented ROSS simulations with their noninstrumented counterparts in order to determine the amount of perturbation when running at different simulation scales.
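
    One tuning quantity derivable from such instrumentation is rollback efficiency, the fraction of processed events that are ultimately committed rather than rolled back. A minimal sketch, assuming illustrative per-PE counts rather than the actual ROSS statistics schema:

      # Field names and counts are illustrative, not the ROSS instrumentation schema.
      def rollback_efficiency(committed, rolled_back):
          total = committed + rolled_back
          return committed / total if total else 1.0

      pe_stats = {0: (9_500_000, 400_000), 1: (9_200_000, 900_000)}  # invented
      for pe, (committed, rolled_back) in pe_stats.items():
          print(f"PE {pe}: rollback efficiency = "
                f"{rollback_efficiency(committed, rolled_back):.3f}")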

  8. Rocket University at KSC

    NASA Technical Reports Server (NTRS)

    Sullivan, Steven J.

    2014-01-01

    "Rocket University" is an exciting new initiative at Kennedy Space Center led by NASA's Engineering and Technology Directorate. This hands-on experience has been established to develop, refine & maintain targeted flight engineering skills to enable the Agency and KSC strategic goals. Through "RocketU", KSC is developing a nimble, rapid flight engineering life cycle systems knowledge base. Ongoing activities in RocketU develop and test new technologies and potential customer systems through small scale vehicles, build and maintain flight experience through balloon and small-scale rocket missions, and enable a revolving fresh perspective of engineers with hands on expertise back into the large scale NASA programs, providing a more experienced multi-disciplined set of systems engineers. This overview will define the Program, highlight aspects of the training curriculum, and identify recent accomplishments and activities.

  9. PASTA for Proteins.

    PubMed

    Collins, Kodi; Warnow, Tandy

    2018-06-19

    PASTA is a multiple sequence alignment method that uses divide-and-conquer plus iteration to enable base alignment methods to scale with high accuracy to large sequence datasets. By default, PASTA includes MAFFT L-INS-i; our new extension of PASTA enables the use of MAFFT G-INS-i, MAFFT Homologs, CONTRAlign, and ProbCons. We analyzed the performance of each base method and of PASTA using these base methods on 224 datasets from BAliBASE 4 with at least 50 sequences. We show that PASTA enables the most accurate base methods to scale to larger datasets at reduced computational effort, and generally improves alignment and tree accuracy on the largest BAliBASE datasets. PASTA is available at https://github.com/kodicollins/pasta and has also been integrated into the original PASTA repository at https://github.com/smirarab/pasta. Supplementary data are available at Bioinformatics online.
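
    As a structural sketch of the divide-and-conquer-plus-iteration idea only (the helpers below are trivial stand-ins for PASTA's actual guide-tree decomposition, base aligners such as MAFFT, and transitivity merge):

      # Stand-in decomposition: fixed-size chunks instead of guide-tree subsets
      def decompose(seqs, max_subset):
          return [seqs[i:i + max_subset] for i in range(0, len(seqs), max_subset)]

      def base_align(subset):           # stand-in for MAFFT L-INS-i etc.
          width = max(len(s) for s in subset)
          return [s.ljust(width, "-") for s in subset]

      def merge(sub_alignments):        # stand-in for the transitivity merge
          width = max(len(a[0]) for a in sub_alignments)
          return [s.ljust(width, "-") for a in sub_alignments for s in a]

      def pasta_like(seqs, n_iters=3, max_subset=200):
          alignment = seqs
          for _ in range(n_iters):      # PASTA iterates align -> tree -> realign
              subsets = decompose(alignment, max_subset)
              alignment = merge([base_align(s) for s in subsets])
          return alignment

      print(pasta_like(["ACGT", "ACG", "AGT"], n_iters=1, max_subset=2))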

  10. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
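
    A toy software model of the random-access idea, with invented sequences: each file's oligonucleotides carry that file's primer, so selective amplification reduces, in computational terms, to a prefix filter (real decoding also involves clustering, consensus, and error-correcting codes):

      pool = [
          ("ATTCGG" + "ACCTGA", "fileA"),
          ("GGATCC" + "TTGACC", "fileB"),
          ("ATTCGG" + "CGGATT", "fileA"),
      ]

      def random_access(pool, primer):
          # "Amplify" only strands carrying the target file's primer
          return [seq[len(primer):] for seq, _label in pool if seq.startswith(primer)]

      print(random_access(pool, "ATTCGG"))  # payloads of fileA only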

  11. Unmanned Aircraft System (UAS) Traffic Management (UTM): Enabling Civilian Low-Altitude Airspace and Unmanned Aerial System Operations

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal Hemchandra

    2016-01-01

    Just a year ago we laid out the UTM challenges and NASA's proposed solutions. During the past year NASA's goal continues to be to conduct research, development and testing to identify airspace operations requirements to enable large-scale visual and beyond visual line-of-sight UAS operations in the low-altitude airspace. Significant progress has been made, and NASA is continuing to move forward.

  12. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time, an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enables iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  13. Large-scale gene function analysis with the PANTHER classification system.

    PubMed

    Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D

    2013-08-01

    The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
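
    The statistical tests described above run on the PANTHER website; an analogous local computation of a category over-representation test (with invented counts, and without the multiple-testing correction PANTHER applies across categories) might look like:

      from scipy.stats import fisher_exact

      # Invented counts: hits in an uploaded gene list vs the genome background
      list_in_cat, list_size = 40, 500
      genome_in_cat, genome_size = 400, 20000

      table = [[list_in_cat, list_size - list_in_cat],
               [genome_in_cat - list_in_cat,
                (genome_size - list_size) - (genome_in_cat - list_in_cat)]]
      odds_ratio, p_value = fisher_exact(table, alternative="greater")
      print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")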

  14. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data

    PubMed Central

    Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.

    2017-01-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
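
    The exact HZ-order encoding used by the IDX format is more involved, but the underlying idea of a hierarchical, locality-preserving linear index can be sketched with plain Morton (Z-order) bit interleaving:

      # Plain Morton order, not the exact HZ encoding used by ViSUS/IDX:
      # nearby 3D samples map to nearby positions in the linear layout.
      def morton3d(x, y, z, bits=10):
          code = 0
          for i in range(bits):
              code |= ((x >> i) & 1) << (3 * i)
              code |= ((y >> i) & 1) << (3 * i + 1)
              code |= ((z >> i) & 1) << (3 * i + 2)
          return code

      print(morton3d(3, 5, 1))  # linear index for voxel (3, 5, 1)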

  15. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    PubMed

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data.

  16. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE PAGES

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    2017-01-07

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time, an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enables iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  17. Wind Power Innovation Enables Shift to Utility-Scale - Continuum Magazine

    Science.gov Websites

    In the 1930s, a farmer in South Dakota built a small wind turbine on his farm; today's utility-scale turbines generate enough electricity to power thousands of homes. [Image: aerial view of the Siemens utility-scale wind turbine at the National Wind Technology Center, with mountains in the background.]

  18. Globus | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.

  19. DNA-encoded chemistry: enabling the deeper sampling of chemical space.

    PubMed

    Goodnow, Robert A; Dumelin, Christoph E; Keefe, Anthony D

    2017-02-01

    DNA-encoded chemical library technologies are increasingly being adopted in drug discovery for hit and lead generation. DNA-encoded chemistry enables the exploration of chemical spaces four to five orders of magnitude more deeply than is achievable by traditional high-throughput screening methods. Operation of this technology requires developing a range of capabilities including aqueous synthetic chemistry, building block acquisition, oligonucleotide conjugation, large-scale molecular biological transformations, selection methodologies, PCR, sequencing, sequence data analysis and the analysis of large chemistry spaces. This Review provides an overview of the development and applications of DNA-encoded chemistry, highlighting the challenges and future directions for the use of this technology.

  20. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon to be exascale systems.

  1. Highly multiplexed targeted proteomics using precise control of peptide retention time.

    PubMed

    Gallien, Sebastien; Peterman, Scott; Kiyonami, Reiko; Souady, Jamal; Duriez, Elodie; Schoen, Alan; Domon, Bruno

    2012-04-01

    Large-scale proteomics applications using SRM analysis on triple quadrupole mass spectrometers present new challenges to LC-MS/MS experimental design. Despite the automation of building large-scale LC-SRM methods, the increased numbers of targeted peptides can compromise the balance between sensitivity and selectivity. To accommodate large target numbers, time-scheduled SRM transition acquisition is performed. Previously published results demonstrated that incorporation of a well-characterized set of synthetic peptides enabled chromatographic characterization of the elution profile for most endogenous peptides. We have extended this application of peptide trainer kits not only to build SRM methods but also to facilitate real-time elution profile characterization that enables automated adjustment of the scheduled detection windows. Incorporation of dynamic retention time adjustments better facilitates targeted assays lasting several days without the need for constant supervision. This paper provides an overview of how the dynamic retention correction approach identifies and corrects for commonly observed LC variations. This adjustment dramatically improves robustness in targeted discovery experiments as well as in routine quantification experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
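
    A minimal sketch of the retention-time correction idea, assuming invented library and observed elution times: fit the drift measured on the reference peptides and re-centre each scheduled window accordingly:

      import numpy as np

      expected = np.array([10.0, 20.0, 30.0, 40.0])   # library RTs, min (invented)
      observed = np.array([10.6, 20.9, 31.2, 41.5])   # measured this run (invented)

      # Linear drift model: observed = slope * expected + intercept
      slope, intercept = np.polyfit(expected, observed, 1)

      def adjust_window(library_rt, half_width=2.0):
          centre = slope * library_rt + intercept
          return centre - half_width, centre + half_width

      print(adjust_window(25.0))  # re-centred acquisition window for a target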

  2. Self-Reacting Friction Stir Welding for Aluminum Complex Curvature Applications

    NASA Technical Reports Server (NTRS)

    Brown, Randy J.; Martin, W.; Schneider, J.; Hartley, P. J.; Russell, Carolyn; Lawless, Kirby; Jones, Chip

    2003-01-01

    This viewgraph representation provides an overview of successful research conducted by Lockheed Martin and NASA to develop an advanced self-reacting friction stir technology for complex curvature aluminum alloys. The research included weld process development for 0.320 inch Al 2219, successful transfer from the 'lab' scale to the production scale tool, and weld quality exceeding strength goals. This process will enable development and implementation of large-scale, complex-geometry hardware fabrication. Topics covered include: weld process development, weld process transfer, and intermediate hardware fabrication.

  3. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
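
    A minimal sketch of the information-aware scoring idea, with invented signal traces standing in for simulated inter-chip data: candidate cuts whose crossing signals have low empirical entropy are cheaper to compress between chips:

      from collections import Counter
      from math import log2

      def entropy_bits(symbols):
          # Empirical Shannon entropy of a symbol trace, in bits per symbol
          n = len(symbols)
          return -sum(c / n * log2(c / n) for c in Counter(symbols).values())

      cut_traces = {
          "cut A": [0, 0, 0, 1, 0, 0, 0, 0],   # mostly idle wires (invented)
          "cut B": [3, 1, 4, 1, 5, 9, 2, 6],   # high-entropy payload (invented)
      }
      best = min(cut_traces, key=lambda k: entropy_bits(cut_traces[k]))
      print({k: round(entropy_bits(v), 2) for k, v in cut_traces.items()}, "->", best)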

  4. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As feature sizes shrink in the foundries, the need for inline high-resolution surface profiling with versatile capabilities is increasing. One of the important areas of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled-scanners design. The system is capable of providing small-scale profiling using the XY scanner and large-scale profiling using the sliding stage. The decoupled-scanners design enables enhanced vision, which helps minimize positioning error for locations of interest on highly polished dies. Non-contact mode imaging is another feature of interest in this system; it is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of measurements performed using the atomic force profiler are demonstrated.

  5. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.

  6. Clipping the cosmos: the bias and bispectrum of large scale structure.

    PubMed

    Simpson, Fergus; James, J Berian; Heavens, Alan F; Heymans, Catherine

    2011-12-30

    A large fraction of the information collected by cosmological surveys is simply discarded to avoid length scales which are difficult to model theoretically. We introduce a new technique which enables the extraction of useful information from the bispectrum of galaxies well beyond the conventional limits of perturbation theory. Our results strongly suggest that this method increases the range of scales where the relation between the bispectrum and power spectrum in tree-level perturbation theory may be applied, from k_max ≈ 0.1 to ≈ 0.7 h Mpc⁻¹. This leads to correspondingly large improvements in the determination of galaxy bias. Since the clipped matter power spectrum closely follows the linear power spectrum, there is the potential to use this technique to probe the growth rate of linear perturbations and confront theories of modified gravity with observation.
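
    The clipping transform itself is simple: overdense peaks are capped at a threshold before the spectrum is measured. A toy illustration on a synthetic skewed field (not the paper's galaxy data):

      import numpy as np

      rng = np.random.default_rng(0)
      delta = rng.lognormal(sigma=1.0, size=256) - 1.0  # skewed overdensity field
      clipped = np.minimum(delta, 1.0)                  # cap peaks at delta0 = 1

      def power(field):
          # Unnormalised power spectrum of a 1D field
          fk = np.fft.rfft(field - field.mean())
          return (fk * fk.conj()).real

      # Clipping suppresses the contribution of rare high-density peaks
      print(power(delta)[:4])
      print(power(clipped)[:4])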

  7. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  8. Environmental aspects of large-scale wind-power systems in the UK

    NASA Astrophysics Data System (ADS)

    Robson, A.

    1984-11-01

    Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the UK are discussed. Noise, television interference, hazards to bird life, and visual effects are considered. Areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first UK machines to be introduced in a safe and environmentally acceptable manner. Research to establish siting criteria more clearly, and to significantly increase the potential wind-energy resource, is mentioned. Studies of the comparative risk of energy systems are shown to be overpessimistic for UK wind turbines.

  9. Crafting threads of diblock copolymer micelles via flow-enabled self-assembly.

    PubMed

    Li, Bo; Han, Wei; Jiang, Beibei; Lin, Zhiqun

    2014-03-25

    Hierarchically assembled amphiphilic diblock copolymer micelles were exquisitely crafted over large areas by capitalizing on two concurrent self-assembling processes at different length scales, namely, the periodic threads composed of a monolayer or a bilayer of diblock copolymer micelles precisely positioned by flow-enabled self-assembly (FESA) on the microscopic scale and the self-assembly of amphiphilic diblock copolymer micelles into ordered arrays within an individual thread on the nanometer scale. A minimum spacing between two adjacent threads λmin was observed. A model was proposed to rationalize the relationship between the thread width and λmin. Such FESA of diblock copolymer micelles is remarkably controllable and easy to implement. It opens up possibilities for lithography-free positioning and patterning of diblock copolymer micelles for various applications in template fabrication of periodic inorganic nanostructures, nanoelectronics, optoelectronics, magnetic devices, and biotechnology.

  10. Organic field effect transistor with ultra high amplification

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio

    2016-09-01

    High-gain transistors are essential for the large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually of the order of tens, because of the large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils enabling unipolar amplifiers with ultra-gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.

  11. A Single-use Strategy to Enable Manufacturing of Affordable Biologics.

    PubMed

    Jacquemart, Renaud; Vandersluis, Melissa; Zhao, Mochao; Sukhija, Karan; Sidhu, Navneet; Stout, Jim

    2016-01-01

    The current processing paradigm of large manufacturing facilities dedicated to single product production is no longer an effective approach for best manufacturing practices. Increasing competition for new indications and the launch of biosimilars for the monoclonal antibody market have put pressure on manufacturers to produce at lower cost. Single-use technologies and continuous upstream processes have proven to be cost-efficient options to increase biomass production but as of today the adoption has been only minimal for the purification operations, partly due to concerns related to cost and scale-up. This review summarizes how a single-use holistic process and facility strategy can overcome scale limitations and enable cost-efficient manufacturing to support the growing demand for affordable biologics. Technologies enabling high productivity, right-sized, small footprint, continuous, and automated upstream and downstream operations are evaluated in order to propose a concept for the flexible facility of the future.

  12. High-content screening in microfluidic devices.

    PubMed

    Cheong, Raymond; Paliwal, Saurabh; Levchenko, Andre

    2010-08-01

    Miniaturization is the key to advancing the state of the art in high-content screening (HCS) in order to enable dramatic cost savings through reduced usage of expensive biochemical reagents and to enable large-scale screening on primary cells. Microfluidic technology offers the potential to enable HCS to be performed with an unprecedented degree of miniaturization. This perspective highlights a real-world example from the authors' work of HCS assays implemented in a highly miniaturized microfluidic format. The advantages of this technology are discussed, including cost savings, high-throughput screening on primary cells, improved accuracy, the ability to study complex time-varying stimuli, and ease of automation, integration and scaling. The reader will understand the capabilities of a new microfluidics-based platform for HCS and the advantages it provides over conventional plate-based HCS. Microfluidics technology will drive significant advancements and broader usage and applicability of HCS in drug discovery.

  13. Framework for Smart Electronic Health Record-Linked Predictive Models to Optimize Care for Complex Digestive Diseases

    DTIC Science & Technology

    2014-07-01

    mucos"x1; N Acquired Abnormality 4.7350 93696 76 0.85771...4. Roden DM, Pulley JM, Basford MA, et al. Development of a large- scale de-identified DNA biobank to enable personalized medicine. Clin Pharmacol...large healthcare system which incorporated clinical information from a 20-hospital setting (both aca- demic and community hospitals) of University of

  14. The iMoD display: considerations and challenges in fabricating MOEMS on large area glass substrates

    NASA Astrophysics Data System (ADS)

    Chui, Clarence; Floyd, Philip D.; Heald, David; Arbuckle, Brian; Lewis, Alan; Kothari, Manish; Cummings, Bill; Palmateer, Lauren; Bos, Jan; Chang, Daniel; Chiang, Jedi; Wang, Li-Ming; Pao, Edmon; Su, Fritz; Huang, Vincent; Lin, Wen-Jian; Tang, Wen-Chung; Yeh, Jia-Jiun; Chan, Chen-Chun; Shu, Fang-Ann; Ju, Yuh-Diing

    2007-01-01

    QUALCOMM has developed iMoD displays, a MEMS-based reflective display technology, and transferred them to manufacturing. The iMoD array architecture allows for development at wafer scale, yet easily scales up to enable fabrication on flat-panel display (FPD) lines. In this paper, we will describe the device operation, process flow and fabrication, technology transfer issues, and display performance.

  15. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field and its potential benefits to large-scale geospatial visualization in general.
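
    The paper's exact index structure is not detailed in this abstract; one common prefix-matching scheme is a quadkey, where objects in the same sky tile share a key prefix. A hypothetical sketch with coordinates normalised to [0, 1):

      def quadkey(x, y, levels=8):
          # Each level picks one of four quadrants, appending a digit 0-3
          key = ""
          for _ in range(levels):
              x, y = x * 2, y * 2
              qx, qy = int(x), int(y)
              key += str(qx + 2 * qy)
              x, y = x - qx, y - qy
          return key

      objects = {"obj1": (0.10, 0.34), "obj2": (0.11, 0.35), "obj3": (0.90, 0.10)}
      index = {name: quadkey(*pos) for name, pos in objects.items()}
      tile = quadkey(0.10, 0.34, levels=4)  # a coarse tile is a key prefix
      print([n for n, k in index.items() if k.startswith(tile)])  # objects in tile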

  16. Equipment characterization to mitigate risks during transfers of cell culture manufacturing processes.

    PubMed

    Sieblist, Christian; Jenzsch, Marco; Pohlscheidt, Michael

    2016-08-01

    The production of monoclonal antibodies by mammalian cell culture in bioreactors up to 25,000 L is state-of-the-art technology in the biotech industry. During the lifecycle of a product, several scale-up activities and technology transfers are typically executed to enable the supply chain strategy of a global pharmaceutical company. Given the sensitivity of mammalian cells to physicochemical culture conditions, process and equipment knowledge are critical to avoid impacts on timelines, product quantity and quality. In particular, the fluid dynamics of large-scale bioreactors versus small-scale models need to be described, and similarity demonstrated, in light of the Quality by Design approach promoted by the FDA. This approach comprises an associated design space which is established during process characterization and validation in bench-scale bioreactors. Therefore the establishment of predictive models and simulation tools for major operating conditions of stirred vessels (mixing, mass transfer, and shear forces), based on fundamental engineering principles, has experienced a renaissance in recent years. This work illustrates the systematic characterization of a large variety of bioreactor designs deployed in a global manufacturing network, ranging from small bench-scale equipment to large-scale production equipment (25,000 L). Several traditional methods to determine power input, mixing, mass transfer and shear forces have been used to create a data base and identify differences for various impeller types and configurations in operating ranges typically applied in cell culture processes at manufacturing scale. In addition, extrapolation of different empirical models, e.g. Cooke et al. (Paper presented at the proceedings of the 2nd international conference of bioreactor fluid dynamics, Cranfield, UK, 1988), has been assessed for validity in these operational ranges. Results for selected designs are shown and serve as examples of structured characterization to enable fast and agile process transfers, scale-up and troubleshooting.
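
    The quantities compared across scales in such characterizations are standard: for instance, ungassed power draw P = Np·ρ·N³·D⁵, power per volume, and impeller tip speed π·N·D. A sketch tabulating these for a bench and a production vessel (the power number Np and all operating values below are illustrative placeholders, not data from this study):

      import math

      def characterize(Np, rho, N_rps, D_m, V_m3):
          power_W = Np * rho * N_rps**3 * D_m**5   # ungassed power draw
          return {"P/V (W/m^3)": power_W / V_m3,
                  "tip speed (m/s)": math.pi * N_rps * D_m}

      print("bench:", characterize(Np=5.0, rho=1000, N_rps=5.0,  D_m=0.06, V_m3=0.002))
      print("25 kL:", characterize(Np=5.0, rho=1000, N_rps=0.75, D_m=1.2,  V_m3=25.0))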

  17. Overcoming spatio-temporal limitations using dynamically scaled in vitro PC-MRI - A flow field comparison to true-scale computer simulations of idealized, stented and patient-specific left main bifurcations.

    PubMed

    Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett

    2016-08-01

    The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires a cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas of 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up approximately 7× and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation, the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates with true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI's spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in future may elevate CFD coronary flow simulations by providing sophisticated boundary conditions, and enable investigations of stenosis phantoms.
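
    Once the data are co-registered and interpolated onto a common grid, the field-level comparison reduces to simple array arithmetic. A sketch with synthetic arrays standing in for the interpolated PC-MRI and CFD velocity fields:

      import numpy as np

      rng = np.random.default_rng(1)
      cfd = rng.normal(size=(1000, 3))           # CFD vectors on the PC-MRI grid
      pcmri = cfd + rng.normal(scale=0.05, size=cfd.shape)  # noisy "measurement"

      mag_cfd = np.linalg.norm(cfd, axis=1)
      mag_mri = np.linalg.norm(pcmri, axis=1)
      rel_err = np.mean(np.abs(mag_mri - mag_cfd) / np.maximum(mag_cfd, 1e-6))
      r2 = np.corrcoef(mag_cfd, mag_mri)[0, 1] ** 2
      print(f"mean relative magnitude error {rel_err:.1%}, r^2 = {r2:.3f}")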

  18. Large scale in vivo recordings to study neuronal biophysics.

    PubMed

    Giocomo, Lisa M

    2015-06-01

    Over the last several years, technological advances have enabled researchers to more readily observe single-cell membrane biophysics in awake, behaving animals. Studies utilizing these technologies have provided important insights into the mechanisms generating functional neural codes in both sensory and non-sensory cortical circuits. Crucial for a deeper understanding of how membrane biophysics control circuit dynamics however, is a continued effort to move toward large scale studies of membrane biophysics, in terms of the numbers of neurons and ion channels examined. Future work faces a number of theoretical and technical challenges on this front but recent technological developments hold great promise for a larger scale understanding of how membrane biophysics contribute to circuit coding and computation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Scaling up digital circuit computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik

    2011-06-03

    To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.

  20. A small-gap electrostatic micro-actuator for large deflections

    PubMed Central

    Conrad, Holger; Schenk, Harald; Kaiser, Bert; Langa, Sergiu; Gaudet, Matthieu; Schimmanz, Klaus; Stolz, Michael; Lenz, Miriam

    2015-01-01

    Common quasi-static electrostatic micro actuators have significant limitations in deflection due to electrode separation and unstable drive regions. State-of-the-art electrostatic actuators achieve maximum deflections of approximately one third of the electrode separation. Large electrode separation and high driving voltages are normally required to achieve large actuator movements. Here we report on an electrostatic actuator class, fabricated in a CMOS-compatible process, which allows high deflections with small electrode separation. The concept presented makes the huge electrostatic forces available at nanometre-scale electrode separations accessible for large deflections. Deflections larger than the electrode separation were measured. An analytical theory is compared with measurement and simulation results and enables a closer understanding of these actuators. The scaling behaviour discussed indicates significant future improvement in actuator deflection. The driving concept presented enables the investigation and development of novel micro systems with a high potential for improved device and system performance. PMID:26655557

  1. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on a XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
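
    The overhead figures quoted above come from benchmark comparisons; the metric itself is simple, as in this sketch with invented native and virtualized runtimes for a weak-scaling series:

      native  = {1: 100.0, 8: 103.0, 32: 106.0}   # seconds, invented
      virtual = {1: 101.0, 8: 112.0, 32: 127.0}   # seconds, invented

      for nodes in native:
          overhead = virtual[nodes] / native[nodes] - 1.0
          print(f"{nodes:>2} nodes: {overhead:.1%} virtualization overhead")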

  2. Uniform standards for genome databases in forest and fruit trees

    USDA-ARS?s Scientific Manuscript database

    TreeGenes and tfGDR serve the international forestry and fruit tree genomics research communities, respectively. These databases hold similar sequence data and provide resources for the submission and recovery of this information in order to enable comparative genomics research. Large-scale genotype...

  3. Enhanced Vehicle Simulation Tool Enables Wider Array of Analyses | News |

    Science.gov Websites

    "... of vehicle types, including conventional vehicles, electric-drive vehicles, and fuel cell vehicles," said NREL Senior Engineer Aaron Brooker. FASTSim facilitates large-scale evaluation of ... and on-road performance.

  4. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
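
    A structural sketch of the parallelizable idea, assuming a stub update step: agents are grouped by subregion and each region is advanced independently, so regions can be mapped over a process pool:

      from multiprocessing import Pool

      def step_region(agents):
          # Stand-in for one simulation step over one postal-code subregion
          return [a + 1 for a in agents]

      if __name__ == "__main__":
          regions = [[1, 2], [3, 4, 5], [6]]   # agents grouped by subregion
          with Pool(processes=3) as pool:
              regions = pool.map(step_region, regions)
          print(regions)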

  5. Eavesdropping on the Arctic: Automated bioacoustics reveal dynamics in songbird breeding phenology

    PubMed Central

    Ellis, Daniel P. W.; Pérez, Jonathan H.; Wingfield, John C.; Boelman, Natalie T.

    2018-01-01

    Bioacoustic networks could vastly expand the coverage of wildlife monitoring to complement satellite observations of climate and vegetation. This approach would enable global-scale understanding of how climate change influences phenomena such as migratory timing of avian species. The enormous data sets that autonomous recorders typically generate demand automated analyses that remain largely undeveloped. We devised automated signal processing and machine learning approaches to estimate dates on which songbird communities arrived at arctic breeding grounds. Acoustically estimated dates agreed well with those determined via traditional surveys and were strongly related to the landscape’s snow-free dates. We found that environmental conditions heavily influenced daily variation in songbird vocal activity, especially before egg laying. Our novel approaches demonstrate that variation in avian migratory arrival can be detected autonomously. Large-scale deployment of this innovation in wildlife monitoring would enable the coverage necessary to assess and forecast changes in bird migration in the face of climate change. PMID:29938220
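
    One simple way to automate such an arrival estimate, sketched here with an invented detections-per-day series (not the paper's actual classifier pipeline): smooth the daily vocal-activity curve and take the first day it exceeds a fraction of its seasonal peak:

      import numpy as np

      activity = np.array([0, 1, 0, 2, 1, 3, 9, 22, 35, 41, 38, 44])  # calls/day
      smoothed = np.convolve(activity, np.ones(3) / 3, mode="same")   # 3-day mean
      threshold = 0.2 * smoothed.max()
      arrival_day = int(np.argmax(smoothed > threshold))  # first day above threshold
      print(f"estimated arrival: day {arrival_day} of the recording season")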

  6. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    PubMed

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM), which trades space for speed of processing to create an intragroup communication approach that is firing-rate independent and offers more flexibility in connectivity than cross-bar architectures, and 2) wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating a large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. By combining STM with MIMO-OFDM techniques, the resulting system offers flexible and scalable connectivity as well as a power- and area-efficient solution for the implementation of very large-scale spiking neural architectures in hardware.

  7. Redox Flow Batteries, Hydrogen and Distributed Storage.

    PubMed

    Dennison, C R; Vrubel, Heron; Amstutz, Véronique; Peljo, Pekka; Toghill, Kathryn E; Girault, Hubert H

    2015-01-01

    Social, economic, and political pressures are causing a shift in the global energy mix, with a preference toward renewable energy sources. In order to realize widespread implementation of these resources, large-scale storage of renewable energy is needed. Among the proposed energy storage technologies, redox flow batteries offer many unique advantages. The primary limitation of these systems, however, is their limited energy density which necessitates very large installations. In order to enhance the energy storage capacity of these systems, we have developed a unique dual-circuit architecture which enables two levels of energy storage; first in the conventional electrolyte, and then through the formation of hydrogen. Moreover, we have begun a pilot-scale demonstration project to investigate the scalability and technical readiness of this approach. This combination of conventional energy storage and hydrogen production is well aligned with the current trajectory of modern energy and mobility infrastructure. The combination of these two means of energy storage enables the possibility of an energy economy dominated by renewable resources.

  8. Large-scale microfluidics providing high-resolution and high-throughput screening of Caenorhabditis elegans poly-glutamine aggregation model

    NASA Astrophysics Data System (ADS)

    Mondal, Sudip; Hegarty, Evan; Martin, Chris; Gökçe, Sertan Kutal; Ghorashian, Navid; Ben-Yakar, Adela

    2016-10-01

    Next generation drug screening could benefit greatly from in vivo studies, using small animal models such as Caenorhabditis elegans for hit identification and lead optimization. Current in vivo assays can operate either at low throughput with high resolution or with low resolution at high throughput. To enable both high-throughput and high-resolution imaging of C. elegans, we developed an automated microfluidic platform. This platform can image 15 z-stacks of ~4,000 C. elegans from 96 different populations using a large-scale chip with a micron resolution in 16 min. Using this platform, we screened ~100,000 animals of the poly-glutamine aggregation model on 25 chips. We tested the efficacy of ~1,000 FDA-approved drugs in improving the aggregation phenotype of the model and identified four confirmed hits. This robust platform now enables high-content screening of various C. elegans disease models at the speed and cost of in vitro cell-based assays.

  9. 3D printing via ambient reactive extrusion

    DOE PAGES

    Rios, Orlando; Carter, William G.; Post, Brian K.; ...

    2018-03-14

    Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.

  10. 3D printing via ambient reactive extrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Carter, William G.; Post, Brian K.

    Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.

  11. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. Particularly, we advocate that image retrieval systems should be scaled up significantly, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
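
    As a minimal sketch of the interactive-retrieval ingredient advocated here (illustrative only; it assumes images have already been reduced to fixed-length feature vectors, and the descriptors, index type, and sizes are invented for the example), a sublinear-search index over image features supports real-time similarity queries:

      # Illustrative only: interactive-scale retrieval needs sublinear search
      # over image feature vectors; here a ball-tree index over toy descriptors.
      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      features = np.random.rand(100_000, 64).astype(np.float32)   # one row per image
      index = NearestNeighbors(n_neighbors=10, algorithm="ball_tree").fit(features)

      query = np.random.rand(1, 64).astype(np.float32)            # features of a new image
      dist, idx = index.kneighbors(query)                         # top-10 most similar images
      print(idx[0])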

  12. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells

    PubMed Central

    Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required to achieve clinically relevant doses for treating a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475

  13. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells.

    PubMed

    Osiecki, Michael J; Michl, Thomas D; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B; Griesser, Hans J; Doran, Michael R

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required to achieve clinically relevant doses for treating a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs.

  14. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.
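
    The cell-scoring step can be pictured roughly as follows. This is an illustrative stand-in, not the authors' machine-learning method; the features, threshold, and data are synthetic:

      # Illustrative sketch (not the authors' pipeline): score a gene perturbation
      # by how far its single-cell feature profiles deviate from control cells.
      import numpy as np

      def perturbation_score(control_features, perturbed_features, threshold=1.5):
          """Fraction of perturbed cells that are outliers relative to controls.

          Both inputs are arrays of shape (n_cells, n_features)."""
          mu = control_features.mean(axis=0)
          sigma = control_features.std(axis=0) + 1e-9
          z = (perturbed_features - mu) / sigma                 # per-feature z-scores
          cell_deviation = np.linalg.norm(z, axis=1) / np.sqrt(z.shape[1])
          return float(np.mean(cell_deviation > threshold))     # perturbed fraction

      rng = np.random.default_rng(0)
      controls = rng.normal(0, 1, size=(5000, 20))
      perturbed = rng.normal(1.5, 1, size=(500, 20))            # shifted phenotype
      print(perturbation_score(controls, perturbed))            # close to 1.0
      print(perturbation_score(controls, controls[:500]))       # close to 0.0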

  15. Electron-Transfer/Higher-Energy Collision Dissociation (EThcD)-Enabled Intact Glycopeptide/Glycoproteome Characterization

    NASA Astrophysics Data System (ADS)

    Yu, Qing; Wang, Bowen; Chen, Zhengwei; Urabe, Go; Glover, Matthew S.; Shi, Xudong; Guo, Lian-Wang; Kent, K. Craig; Li, Lingjun

    2017-09-01

    Protein glycosylation, one of the most heterogeneous post-translational modifications, can play a major role in cellular signal transduction and disease progression. Traditional mass spectrometry (MS)-based large-scale glycoprotein sequencing studies rely heavily on identifying enzymatically released glycans and their original peptide backbones separately, as there has been no efficient fragmentation method that produces unbiased glycan and peptide product ions simultaneously in a single spectrum and that can be conveniently applied to high-throughput glycoproteome characterization, especially for N-glycopeptides, which can have far more branched glycan side chains than the relatively less complex O-linked glycans. In this study, a redefined electron-transfer/higher-energy collision dissociation (EThcD) fragmentation scheme is applied to incorporate both glycan and peptide fragments in one single spectrum, enabling complete information to be gathered and fine microheterogeneity details to be revealed. Fetuin was first utilized to prove the applicability, with 19 glycopeptides and the corresponding five glycosylation sites identified. Subsequent experiments tested its utility for human plasma N-glycoproteins. Large-scale studies explored N-glycoproteomics in rat carotid arteries over the course of restenosis progression to investigate the potential role of glycosylation. The integrated fragmentation scheme provides a powerful tool for the analysis of intact N-glycopeptides and N-glycoproteomics. We also anticipate this approach can be readily applied to large-scale O-glycoproteome characterization.

  16. Large-scale virtual screening on public cloud resources with Apache Spark.

    PubMed

    Capuccini, Marco; Ahmed, Laeeq; Schaal, Wesley; Laure, Erwin; Spjuth, Ola

    2017-01-01

    Structure-based virtual screening is an in-silico method to screen a target receptor against a virtual molecular library. Applying docking-based screening to large molecular libraries can be computationally expensive; however, it constitutes a trivially parallelizable task. Most of the available parallel implementations are based on the message passing interface, relying on low-failure-rate hardware and fast network connections. Google's MapReduce revolutionized large-scale analysis, enabling the processing of massive datasets on commodity hardware and cloud resources, providing transparent scalability and fault tolerance at the software level. Open source implementations of MapReduce include Apache Hadoop and the more recent Apache Spark. We developed a method to run existing docking-based screening software on distributed cloud resources, utilizing the MapReduce approach. We benchmarked our method, which is implemented in Apache Spark, docking a publicly available target receptor against ~2.2 M compounds. The performance experiments show a good parallel efficiency (87%) when running in a public cloud environment. Our method enables parallel structure-based virtual screening on public cloud resources or commodity computer clusters. The degree of scalability that we achieve allows for trying out our method on relatively small libraries first and then scaling to larger libraries. Our implementation is named Spark-VS and it is freely available as open source from GitHub (https://github.com/mcapuccini/spark-vs).
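
    The map/reduce structure of such a screen can be sketched in a few lines of PySpark; dock_one below is a hypothetical placeholder for the real docking program that Spark-VS wraps, and the input path is assumed:

      # Minimal sketch of MapReduce-style docking with PySpark; dock_one is a
      # placeholder standing in for a real docking tool, not part of Spark-VS.
      from pyspark.sql import SparkSession

      def dock_one(smiles):
          """Fake scoring function; a real pipeline would call a docking program."""
          return (smiles, float(len(smiles) % 7))

      spark = SparkSession.builder.appName("toy-virtual-screen").getOrCreate()
      library = spark.sparkContext.textFile("compounds.smi")   # one SMILES per line (assumed)

      top_hits = (library
                  .map(dock_one)                               # embarrassingly parallel "map"
                  .takeOrdered(10, key=lambda kv: kv[1]))      # "reduce": keep lowest scores
      print(top_hits)
      spark.stop()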

  17. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework, based on a state-, model-, and goal-based architecture for semi-autonomous control systems, that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  18. Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes

    NASA Astrophysics Data System (ADS)

    Kumar, P.

    2017-12-01

    The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. The availability of hyperspectral measurements further augments these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.

  19. Dynamic contact network between ribosomal subunits enables rapid large-scale rotation during spontaneous translocation

    PubMed Central

    Bock, Lars V.; Blau, Christian; Vaiana, Andrea C.; Grubmüller, Helmut

    2015-01-01

    During ribosomal translation, the two ribosomal subunits remain associated through intersubunit bridges, despite rapid large-scale intersubunit rotation. The absence of large barriers hindering rotation is a prerequisite for rapid rotation. Here, we investigate how such a flat free-energy landscape is achieved, in particular considering the large shifts the bridges undergo at the periphery. The dynamics and energetics of the intersubunit contact network are studied using molecular dynamics simulations of the prokaryotic ribosome in intermediate states of spontaneous translocation. Based on observed occupancies of intersubunit contacts, residues were grouped into clusters. In addition to the central contact clusters, peripheral clusters were found to maintain strong steady interactions by changing contacts in the course of rotation. The peripheral B1 bridges are stabilized by a changing contact pattern of charged residues that adapts to the rotational state. In contrast, steady strong interactions of the B4 bridge are ensured by the flexible helix H34 following the movement of protein S15. The tRNAs which span the subunits contribute to the intersubunit binding enthalpy to an almost constant degree, despite their different positions in the ribosome. These mechanisms keep the intersubunit interaction strong and steady during rotation, thereby preventing dissociation and enabling rapid rotation. PMID:26109353

  20. Multi-scale multi-point observation of dipolarization in the near-Earth's magnetotail

    NASA Astrophysics Data System (ADS)

    Nakamura, R.; Varsani, A.; Genestreti, K.; Nakamura, T.; Baumjohann, W.; Birn, J.; Le Contel, O.; Nagai, T.

    2017-12-01

    We report on the evolution of dipolarization in the near-Earth plasma sheet during two intense substorms, based on observations when the four spacecraft of the Magnetospheric Multiscale (MMS) mission, together with GOES and Geotail, were located in the near-Earth magnetotail. These multiple spacecraft, together with ground-based magnetograms, enabled us to obtain the location of the large-scale substorm current wedge (SCW) and overall changes in the plasma sheet configuration. MMS was located in the southern hemisphere at the outer plasma sheet and observed fast flow disturbances associated with dipolarizations. The high time-resolution measurements from MMS enable us to detect the rapid motion of the field structures and the flow disturbances separately and to resolve signatures below the ion scales. We found small-scale transient field-aligned current sheets associated with upward streaming cold plasmas and Hall-current layers in the fast flow shear region. Observations of these current structures are compared with simulations of reconnection jets.

  1. Overview of Accelerator Applications in Energy

    NASA Astrophysics Data System (ADS)

    Garnett, Robert W.; Sheffield, Richard L.

    An overview of the application of accelerators and accelerator technology in energy is presented. Applications span a broad range of cost, size, and complexity and include large-scale systems requiring high-power or high-energy accelerators to drive subcritical reactors for energy production or waste transmutation, as well as small-scale industrial systems used to improve oil and gas exploration and production. The enabling accelerator technologies will also be reviewed and future directions discussed.

  2. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014, representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Computing Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  3. Practices and Strategies of Distributed Knowledge Collaboration

    ERIC Educational Resources Information Center

    Kudaravalli, Srinivas

    2010-01-01

    Information Technology is enabling large-scale, distributed collaboration across many different kinds of boundaries. Researchers have used the label new organizational forms to describe such collaborations and suggested that they are better able to meet the demands of flexibility, speed and adaptability that characterize the knowledge economy.…

  4. Emergent Network Defense

    ERIC Educational Resources Information Center

    Crane, Earl Newell

    2013-01-01

    The research problem that inspired this effort is the challenge of managing the security of systems in large-scale heterogeneous networked environments. Human intervention is slow and limited: humans operate at much slower speeds than networked computer communications and there are few humans associated with each network. Enabling each node in the…

  5. Thinking big: linking rivers to landscapes

    Treesearch

    Joan O’Callaghan; Ashley E. Steel; Kelly M. Burnett

    2012-01-01

    Exploring relationships between landscape characteristics and rivers is an emerging field, enabled by the proliferation of satellite data, advances in statistical analysis, and increased emphasis on large-scale monitoring. Landscape features such as road networks, underlying geology, and human developments determine the characteristics of the rivers flowing through...

  6. Activity-based protein profiling for biochemical pathway discovery in cancer

    PubMed Central

    Nomura, Daniel K.; Dix, Melissa M.; Cravatt, Benjamin F.

    2011-01-01

    Large-scale profiling methods have uncovered numerous gene and protein expression changes that correlate with tumorigenesis. However, determining the relevance of these expression changes and which biochemical pathways they affect has been hindered by our incomplete understanding of the proteome and its myriad functions and modes of regulation. Activity-based profiling platforms enable both the discovery of cancer-relevant enzymes and selective pharmacological probes to perturb and characterize these proteins in tumour cells. When integrated with other large-scale profiling methods, activity-based proteomics can provide insight into the metabolic and signalling pathways that support cancer pathogenesis and illuminate new strategies for disease diagnosis and treatment. PMID:20703252

  7. Integrating Cloud-Computing-Specific Model into Aircraft Design

    NASA Astrophysics Data System (ADS)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services being introduced will slowly replace many types of computational resources currently in use. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper integrates a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale, expensive software such as CFD (Computational Fluid Dynamics) tools, UG, and CATIA.

  8. Significance of dual polarized long wavelength radar for terrain analysis

    NASA Technical Reports Server (NTRS)

    Macdonald, H. C.; Waite, W. P.

    1978-01-01

    Long wavelength systems with improved penetration capability have been considered to have the potential for minimizing the vegetation contribution and enhancing the surface return variations. L-band imagery of the Arkansas geologic test site provides confirmatory evidence of this effect. However, the increased wavelength increases the sensitivity to larger-scale structure at relatively small incidence angles. The regularity of agricultural and urban scenes provides large components in the low-frequency, large-scale portion of the roughness spectrum that are highly sensitive to orientation. The addition of a cross-polarized channel is shown to enable the interpreter to distinguish vegetation and orientational perturbations in the surface return.

  9. Numerical Propulsion System Simulation (NPSS) 1999 Industry Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Evans, Austin

    2000-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. In addition, the paper contains a summary of the feedback received from industry partners in the development effort and the actions taken over the past year to respond to that feedback. The NPSS development was supported in FY99 by the High Performance Computing and Communications Program.

  10. Large-scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU).

    PubMed

    Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin

    2015-01-15

    Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22× speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
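
    The programs described here use Matlab's GPU-enabled functions; an analogous pattern in Python (a sketch using the CuPy library as a stand-in, not the authors' code) keeps the heavy array math on the GPU and copies results back to the host:

      # Analogous GPU offloading sketch with CuPy (NumPy-compatible GPU arrays);
      # illustrative only, not the paper's Matlab implementation.
      import numpy as np
      import cupy as cp

      def highpass_filter_gpu(traces):
          """FFT-based filtering of many recorded traces at once on the GPU."""
          gpu_traces = cp.asarray(traces)             # host -> device copy
          spectrum = cp.fft.rfft(gpu_traces, axis=1)
          spectrum[:, :5] = 0                         # crude high-pass: zero lowest bins
          filtered = cp.fft.irfft(spectrum, n=traces.shape[1], axis=1)
          return cp.asnumpy(filtered)                 # device -> host copy

      traces = np.random.randn(10000, 4096).astype(np.float32)   # simulated LSPS sweeps
      filtered = highpass_filter_gpu(traces)
      print(filtered.shape)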

  11. Large temporal scale and capacity subsurface bulk energy storage with CO2

    NASA Astrophysics Data System (ADS)

    Saar, M. O.; Fleming, M. R.; Adams, B. M.; Ogland-Hand, J.; Nelson, E. S.; Randolph, J.; Sioshansi, R.; Kuehn, T. H.; Buscheck, T. A.; Bielicki, J. M.

    2017-12-01

    Decarbonizing energy systems by increasing the penetration of variable renewable energy (VRE) technologies requires efficient short- to long-term energy storage. Very large amounts of energy can be stored in the subsurface as heat and/or pressure energy in order to provide both short- and long-term (seasonal) storage, depending on the implementation. This energy storage approach can be quite efficient, especially where geothermal energy is naturally added to the system. Here, we present subsurface heat and/or pressure energy storage with supercritical carbon dioxide (CO2) and discuss the system's efficiency, deployment options, as well as its advantages and disadvantages compared to several other energy storage options. CO2-based subsurface bulk energy storage has the potential to be particularly efficient and large-scale, both temporally (i.e., seasonal) and spatially. The latter refers to the amount of energy that can be stored underground, using CO2, at a geologically conducive location, potentially enabling the storage of excess power from a substantial portion of the power grid. The implication is that it would be possible to employ centralized energy storage for (a substantial part of) the power grid, where the geology enables CO2-based bulk subsurface energy storage, whereas the VRE technologies (solar, wind) are located on that same power grid where (solar, wind) conditions are ideal. However, this may require reinforcing the power grid's transmission lines in certain parts of the grid to enable high-load power transmission from/to a few locations.

  12. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    PubMed Central

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633

  13. Mechanisation of large-scale agricultural fields in developing countries - a review.

    PubMed

    Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila

    2016-09-01

    Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among others. Therefore this paper attempts to evaluate the application of present-day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present-day technology, its management and application to large-scale agricultural fields. This review also gives emphasis to a future outlook that will enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.

  14. Testing of the NASA Hypersonics Project Combined Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE LlMX)

    NASA Technical Reports Server (NTRS)

    Saunders, J. D.; Stueber, T. J.; Thomas, S. R.; Suder, K. L.; Weir, L. J.; Sanders, B. W.

    2012-01-01

    Status of an effort to develop Turbine Based Combined Cycle (TBCC) propulsion is described. This propulsion technology can enable reliable and reusable space launch systems. TBCC propulsion offers improved performance and safety over rocket propulsion. The potential to realize aircraft-like operations and reduced maintenance are additional benefits. Among the most critical TBCC enabling technologies are: 1) mode transition from turbine to scramjet propulsion, 2) high Mach turbine engines, and 3) TBCC integration. To address these TBCC challenges, the effort is centered on a propulsion mode transition experiment and includes analytical research. The test program, the Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE LIMX), was conceived to integrate TBCC propulsion with proposed hypersonic vehicles. The goals address: (1) dual inlet operability and performance, (2) mode-transition sequences enabling a switch between turbine and scramjet flow paths, and (3) turbine engine transients during transition. Four test phases are planned, from which a database can be used both to validate design and analysis codes and to characterize operability and integration issues for TBCC propulsion. In this paper we discuss the research objectives, features of the CCE hardware and test plans, and the status of the parametric inlet characterization testing, which began in 2011. This effort is sponsored by the NASA Fundamental Aeronautics Hypersonics project.

  15. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
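
    The scaling claim can be made concrete on a toy problem: for dx/dt = -a*x + b with loss J = (x(T) - target)^2, one forward solve plus one backward (adjoint) solve yields the gradient with respect to every parameter at once, which is why the cost is effectively independent of the parameter count. The model, loss, and tolerances below are illustrative, not from the paper:

      # Minimal adjoint-sensitivity sketch: dJ/da and dJ/db from one forward and
      # one backward solve, with a finite-difference cross-check.
      import numpy as np
      from scipy.integrate import solve_ivp

      a, b, x0, T, target = 0.8, 0.3, 2.0, 5.0, 0.5

      fwd = solve_ivp(lambda t, x: -a * x[0] + b, (0, T), [x0],
                      dense_output=True, rtol=1e-10)
      lam_T = 2.0 * (fwd.y[0, -1] - target)        # dJ/dx at the final time

      def backward(t, z):
          lam, ga, gb = z
          x = fwd.sol(t)[0]
          return [a * lam,                          # dlam/dt = -(df/dx)^T lam
                  lam * x,                          # accumulates dJ/da (df/da = -x)
                  -lam]                             # accumulates dJ/db (df/db = 1)

      bwd = solve_ivp(backward, (T, 0), [lam_T, 0.0, 0.0], rtol=1e-10)
      print("adjoint gradient:", bwd.y[1, -1], bwd.y[2, -1])

      # Finite-difference check on dJ/da:
      eps = 1e-6
      def loss(a_):
          s = solve_ivp(lambda t, x: -a_ * x[0] + b, (0, T), [x0], rtol=1e-10)
          return (s.y[0, -1] - target) ** 2
      print("finite diff dJ/da:", (loss(a + eps) - loss(a - eps)) / (2 * eps))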

  16. Novel insights into host responses and the reproductive pathophysiology of type 2 porcine reproductive and respiratory syndrome (PRRS)

    USDA-ARS?s Scientific Manuscript database

    A large-scale challenge experiment using type 2 porcine reproductive and respiratory virus (PRRSV) provided new insights into the pathophysiology of reproductive PRRS in third-trimester pregnant gilts. Deep phenotyping enabled identification of maternal and fetal factors predictive of PRRS severity ...

  17. Workforce Development Analysis | Energy Analysis | NREL

    Science.gov Websites

    Describes NREL analysis of the education, training, and experience that will enable continued large-scale deployment of wind and solar technologies, including engineers and project managers, and of standardized education and training at all levels, from primary school onward; one-half of surveyed firms reported needs spanning customer service, construction, and electrical projects.

  18. Enabling the Interoperability of Large-Scale Legacy Systems

    DTIC Science & Technology

    2008-01-01

    information retrieval systems (Salton and McGill 1983). We use this method because, in the schema mapping task, only one instance per class is... (2001). A survey of approaches to automatic schema matching. The VLDB Journal, 10, 334-350. Salton, G., & McGill, M.J. (1983). Introduction to

  19. To the Cloud! A Grassroots Proposal to Accelerate Brain Science Discovery

    PubMed Central

    Vogelstein, Joshua T.; Mensh, Brett; Hausser, Michael; Spruston, Nelson; Evans, Alan; Kording, Konrad; Amunts, Katrin; Ebell, Christoph; Muller, Jeff; Telefont, Martin; Hill, Sean; Koushika, Sandhya P.; Cali, Corrado; Valdés-Sosa, Pedro Antonio; Littlewood, Peter; Koch, Christof; Saalfeld, Stephan; Kepecs, Adam; Peng, Hanchuan; Halchenko, Yaroslav O.; Kiar, Gregory; Poo, Mu-Ming; Poline, Jean-Baptiste; Milham, Michael P.; Schaffer, Alyssa Picchini; Gidron, Rafi; Okano, Hideyuki; Calhoun, Vince D; Chun, Miyoung; Kleissas, Dean M.; Vogelstein, R. Jacob; Perlman, Eric; Burns, Randal; Huganir, Richard; Miller, Michael I.

    2018-01-01

    The revolution in neuroscientific data acquisition is creating an analysis challenge. We propose leveraging cloud-computing technologies to enable large-scale neurodata storing, exploring, analyzing, and modeling. This utility will empower scientists globally to generate and test theories of brain function and dysfunction. PMID:27810005

  20. Molecular Precision at Micrometer Length Scales: Hierarchical Assembly of DNA-Protein Nanostructures.

    PubMed

    Schiffels, Daniel; Szalai, Veronika A; Liddle, J Alexander

    2017-07-25

    Robust self-assembly across length scales is a ubiquitous feature of biological systems but remains challenging for synthetic structures. Taking a cue from biology-where disparate molecules work together to produce large, functional assemblies-we demonstrate how to engineer microscale structures with nanoscale features: Our self-assembly approach begins by using DNA polymerase to controllably create double-stranded DNA (dsDNA) sections on a single-stranded template. The single-stranded DNA (ssDNA) sections are then folded into a mechanically flexible skeleton by the origami method. This process simultaneously shapes the structure at the nanoscale and directs the large-scale geometry. The DNA skeleton guides the assembly of RecA protein filaments, which provides rigidity at the micrometer scale. We use our modular design strategy to assemble tetrahedral, rectangular, and linear shapes of defined dimensions. This method enables the robust construction of complex assemblies, greatly extending the range of DNA-based self-assembly methods.

  1. From catchment scale hydrologic processes to numerical models and robust predictions of climate change impacts at regional scales

    NASA Astrophysics Data System (ADS)

    Wagener, T.

    2017-12-01

    Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can arise because the future is not like the past (e.g. due to climate change), because the historical data are unreliable (e.g. because they are imperfectly recorded from proxies or missing), or because they are scarce (either because measurements are not available at the right scale or because there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment-scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.

  2. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.
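
    To illustrate the global-alignment default (CLAST's actual CUDA kernels are far more elaborate than this), here is a textbook Needleman-Wunsch score in Python with assumed match/mismatch/gap parameters:

      # Toy Needleman-Wunsch global alignment score; illustrates the class of
      # computation CLAST runs on GPUs, not CLAST's own implementation.
      import numpy as np

      def global_alignment_score(s1, s2, match=1, mismatch=-1, gap=-2):
          n, m = len(s1), len(s2)
          H = np.zeros((n + 1, m + 1))
          H[:, 0] = gap * np.arange(n + 1)      # leading gaps penalized (global)
          H[0, :] = gap * np.arange(m + 1)
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = H[i-1, j-1] + (match if s1[i-1] == s2[j-1] else mismatch)
                  H[i, j] = max(diag, H[i-1, j] + gap, H[i, j-1] + gap)
          return H[n, m]

      print(global_alignment_score("GATTACA", "GCATGCA"))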

  3. Transitioning a home telehealth project into a sustainable, large-scale service: a qualitative study.

    PubMed

    Wade, Victoria A; Taylor, Alan D; Kidd, Michael R; Carati, Colin

    2016-05-16

    This study was a component of the Flinders Telehealth in the Home project, which tested adding home telehealth to existing rehabilitation, palliative care and geriatric outreach services. Due to the known difficulty of transitioning telehealth projects into services, a qualitative study was conducted to produce a preferred implementation approach for sustainable and large-scale operations, and a process model that offers practical advice for achieving this goal. Initially, semi-structured interviews were conducted with senior clinicians, health service managers and policy makers, and a thematic analysis of the interview transcripts was undertaken to identify the range of options for ongoing operations, plus the factors affecting sustainability. Subsequently, the interviewees and other decision makers attended a deliberative forum in which participants were asked to select a preferred model for future implementation. Finally, all data from the study were synthesised by the researchers to produce a process model. Nineteen interviews with senior clinicians, managers, and service development staff were conducted, finding strong support for home telehealth but a wide diversity of views on governance, models of clinical care, technical infrastructure operations, and data management. The deliberative forum worked through these options and recommended a collaborative consortium approach for large-scale implementation. The process model proposes that the key factor for large-scale implementation is leadership support, which is enabled by 1) showing solutions to the problems of service demand, budgetary pressure and the relationship between hospital and primary care, 2) demonstrating how home telehealth aligns with health service policies, and 3) achieving clinician acceptance through providing evidence of benefit and developing new models of clinical care. Two key actions to enable change were marketing telehealth to patients, clinicians and policy-makers, and building a community of practice. The implementation of home telehealth services is still at an early stage. Change agents and a community of practice can contribute by marketing telehealth, demonstrating policy alignment and providing potential solutions for difficult health services problems. This should assist health leaders to move from trials to large-scale services.

  4. Personality in 100,000 Words: A large-scale analysis of personality and word use among bloggers

    PubMed Central

    Yarkoni, Tal

    2010-01-01

    Previous studies have found systematic associations between personality and individual differences in word use. Such studies have typically focused on broad associations between major personality domains and aggregate word categories, potentially masking more specific associations. Here I report the results of a large-scale analysis of personality and word use in a large sample of blogs (N=694). The size of the dataset enabled pervasive correlations with personality to be identified for a broad range of lexical variables, including both aggregate word categories and individual English words. The results replicated category-level findings from previous offline studies, identified numerous novel associations at both a categorical and single-word level, and underscored the value of complementary approaches to the study of personality and word use. PMID:20563301
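
    The single-word analysis reduces to computing one correlation per vocabulary item across bloggers. The sketch below uses simulated frequencies and trait scores, not the study's data:

      # Schematic of the word-level analysis: correlate each word's relative
      # frequency across bloggers with a personality trait score.
      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(1)
      n_bloggers, vocab = 694, 1000
      word_freq = rng.poisson(2.0, size=(n_bloggers, vocab)).astype(float)
      word_freq /= word_freq.sum(axis=1, keepdims=True)   # per-blogger relative frequency
      extraversion = rng.normal(size=n_bloggers)          # stand-in trait scores

      correlations = np.array([pearsonr(word_freq[:, w], extraversion)[0]
                               for w in range(vocab)])
      top_words = np.argsort(correlations)[-10:]          # most positively associated words
      print(correlations[top_words])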

  5. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  6. High content screening in microfluidic devices

    PubMed Central

    Cheong, Raymond; Paliwal, Saurabh; Levchenko, Andre

    2011-01-01

    Importance of the field Miniaturization is key to advancing the state-of-the-art in high content screening (HCS), in order to enable dramatic cost savings through reduced usage of expensive biochemical reagents and to enable large-scale screening on primary cells. Microfluidic technology offers the potential to enable HCS to be performed with an unprecedented degree of miniaturization. Areas covered in this review This perspective highlights a real-world example from the authors’ work of HCS assays implemented in a highly miniaturized microfluidic format. Advantages of this technology are discussed, including cost savings, high throughput screening on primary cells, improved accuracy, the ability to study complex time-varying stimuli, and ease of automation, integration, and scaling. What the reader will gain The reader will understand the capabilities of a new microfluidics-based platform for HCS, and the advantages it provides over conventional plate-based HCS. Take home message Microfluidics technology will drive significant advancements and broader usage and applicability of HCS in drug discovery. PMID:21852997

  7. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of its Level-2 full physics data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale processing as well as operational cost savings.
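
    One generic ingredient of tolerating spot-market preemptions is making work units idempotent and checkpointed, so that a reclaimed instance's unfinished work can simply be retried. The sketch below is hypothetical; the file name, process function, and retry policy are assumptions, not HySDS internals:

      # Hypothetical fault-tolerance sketch for spot-market computing:
      # idempotent work units, a checkpoint file, and retry on preemption.
      import json, os

      CHECKPOINT = "done_granules.json"              # assumed checkpoint file

      def load_done():
          return set(json.load(open(CHECKPOINT))) if os.path.exists(CHECKPOINT) else set()

      def mark_done(done, granule):
          done.add(granule)
          json.dump(sorted(done), open(CHECKPOINT, "w"))

      def process(granule):
          """Placeholder for one L2 full-physics retrieval job."""
          print("processed", granule)

      def run(granules, max_retries=5):
          done = load_done()
          for granule in granules:
              if granule in done:
                  continue                            # survives instance preemption
              for attempt in range(max_retries):
                  try:
                      process(granule)
                      mark_done(done, granule)
                      break
                  except Exception:                   # e.g. spot instance reclaimed
                      continue

      run([f"granule-{i:04d}" for i in range(10)])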

  8. Characterizing multi-scale self-similar behavior and non-statistical properties of fluctuations in financial time series

    NASA Astrophysics Data System (ADS)

    Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.

    2011-11-01

    We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations, starting with high frequency fluctuations. We make use of the Db4 and Db6 basis sets to isolate local linear and quadratic trends, respectively, at different scales, in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
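
    The trend-removal idea can be reproduced with the PyWavelets package. Note that pywt names Daubechies wavelets by vanishing moments, so the paper's 4-tap Db4 (which removes local linear trends) corresponds to pywt's 'db2'; that mapping, and the synthetic series, are our assumptions:

      # Sketch of isolating scale-wise fluctuations with PyWavelets.
      import numpy as np
      import pywt

      # Toy stand-in for a log-return series (not market data):
      returns = np.diff(np.log(np.cumsum(np.abs(np.random.randn(4096))) + 10))

      coeffs = pywt.wavedec(returns, 'db2', level=6)
      coeffs[0] = np.zeros_like(coeffs[0])          # drop the smooth trend (approximation)
      fluctuations = pywt.waverec(coeffs, 'db2')    # what remains: multi-scale fluctuations

      # Energy per scale, from fine (level 1) to coarse (level 6):
      for lvl, d in enumerate(reversed(coeffs[1:]), start=1):
          print(f"level {lvl}: energy {np.sum(d**2):.4f}")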

  9. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  10. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements, therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations

    NASA Technical Reports Server (NTRS)

    Sorensen, Danny C.

    1996-01-01

    Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
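    The implicitly restarted Arnoldi/Lanczos machinery described here is what ARPACK implements, and SciPy exposes it directly; a small sketch follows (the test operator and parameters are arbitrary choices for illustration):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs, eigsh  # ARPACK wrappers

n = 10_000
# Sparse symmetric test operator: 1-D discrete Laplacian (illustrative).
lap = sp.diags([np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)],
               [-1, 0, 1], format='csr')

# Symmetric case: implicitly restarted Lanczos for a few extremal eigenvalues.
vals_sym = eigsh(lap, k=6, which='LM', return_eigenvectors=False)

# Nonsymmetric case: implicitly restarted Arnoldi on a perturbed operator.
nonsym = lap + sp.random(n, n, density=1e-6, format='csr', random_state=0)
vals_nonsym = eigs(nonsym, k=6, which='LM', return_eigenvectors=False)
print(vals_sym, vals_nonsym)
```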

  12. UTM Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal

    2017-01-01

    Conduct research, development and testing to identify airspace operations requirements to enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little, test-a-little strategy that progresses from remote areas to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  14. Information Power Grid (IPG) Tutorial 2003

    NASA Technical Reports Server (NTRS)

    Meyers, George

    2003-01-01

    For NASA and the general community today, Grid middleware: a) provides tools to access and use data sources (databases, instruments, ...); b) provides tools to access computing (unique and generic); c) is an enabler of large-scale collaboration. Dynamically responding to needs is a key selling point of a grid: independent resources can be joined as appropriate to solve a problem. The IPG provides tools that enable the building of frameworks for applications, and value-added services that help the NASA user base utilize resources on the grid in new and more efficient ways.

  15. Enabling Cross-Platform Clinical Decision Support through Web-Based Decision Support in Commercial Electronic Health Record Systems: Proposal and Evaluation of Initial Prototype Implementations

    PubMed Central

    Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku

    2013-01-01

    Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
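    To make the "web-based CDS" idea concrete, here is a hypothetical sketch of such a service in Python/Flask; the endpoint path, payload shape, and interaction rule are invented for illustration, and a production system would instead use a vendor API or a standard such as CDS Hooks / SMART on FHIR.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint: an EHR posts patient context, receives advisories.
@app.route("/cds/drug-interaction", methods=["POST"])
def drug_interaction():
    payload = request.get_json(silent=True) or {}
    meds = {m.lower() for m in payload.get("medications", [])}
    alerts = []
    if {"warfarin", "aspirin"} <= meds:  # toy rule, not clinical guidance
        alerts.append("Increased bleeding risk: warfarin + aspirin")
    return jsonify({"alerts": alerts})

if __name__ == "__main__":
    app.run(port=5000)
```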

  16. Cryogenic Selective Surface - How Cold Can We Go?

    NASA Technical Reports Server (NTRS)

    Youngquist, Robert; Nurge, Mark

    2015-01-01

    Selective surfaces have wavelength-dependent emissivity/absorption. These surfaces can be designed to reflect solar radiation while maximizing infrared emittance, yielding a cooling effect even in sunlight. On Earth, cooling to 50 C below ambient has been achieved, but in space, outside the atmosphere, theory using ideal materials has predicted a maximum cooling to 40 K! If this result holds up for real-world materials and conditions, then superconducting systems and cryogenic storage can be achieved in space without active cooling. Such a result would enable long-term cryogenic storage in deep space and the use of large-scale superconducting systems for such applications as galactic cosmic radiation (GCR) shielding and large-scale energy storage.
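    A back-of-envelope energy balance makes the claim tangible; the absorptance/emittance values below are assumptions, not figures from the article, though the near-ideal pair happens to reproduce the quoted ~40 K regime.

```python
# Equilibrium temperature of a sun-facing surface in vacuum with no view of
# warm bodies: absorbed sunlight = thermal emission to deep space,
#   alpha * S = eps * sigma * T^4
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant near Earth, W m^-2

for alpha, eps in [(0.05, 0.90),     # assumed realistic selective surface
                   (1.0e-4, 0.90)]:  # assumed near-ideal solar reflector
    T_eq = (alpha * S / (eps * sigma)) ** 0.25
    print(f"alpha={alpha:g}, eps={eps:g} -> T_eq = {T_eq:.0f} K")
    # prints roughly 191 K and 40 K for the two cases
```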

  17. Environmental aspects of large-scale wind-power systems in the UK

    NASA Astrophysics Data System (ADS)

    Robson, A.

    1983-12-01

    Environmental issues relating to the introduction of large, MW-scale wind turbines at land-based sites in the U.K. are discussed. Areas of interest include noise, television interference, hazards to bird life and visual effects. A number of areas of uncertainty are identified, but enough is known from experience elsewhere in the world to enable the first U.K. machines to be introduced in a safe and environmentally acceptable manner. Research currently under way will serve to establish siting criteria more clearly, and could significantly increase the potential wind-energy resource. Certain studies of the comparative risk of energy systems are shown to be overpessimistic for U.K. wind turbines.

  18. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive, combined with other satellite, climate, and weather data, is creating previously unimagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field scales.
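    For a sense of the data-access pattern (not the SSEBop model itself), a hedged sketch with the Earth Engine Python API follows; the area of interest and date range are invented, and the band name and scaling constants are those published for the public Landsat Collection 2 Level-2 catalog.

```python
import ee

ee.Initialize()  # assumes prior ee.Authenticate() and project setup

# Hypothetical area of interest and season.
aoi = ee.Geometry.Rectangle([-100.5, 38.0, -100.0, 38.5])

# Mean summertime land surface temperature from Landsat 8 Collection 2 L2;
# ST_B10 scale/offset per the USGS catalog documentation.
lst = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
       .filterBounds(aoi)
       .filterDate('2020-06-01', '2020-09-01')
       .select('ST_B10')
       .mean()
       .multiply(0.00341802).add(149.0))  # digital numbers -> kelvin

stats = lst.reduceRegion(reducer=ee.Reducer.mean(), geometry=aoi,
                         scale=30, maxPixels=1e9)
print(stats.getInfo())  # SSEBop would combine LST with Tmax, dT, etc.
```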

  19. Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware

    PubMed Central

    Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose

    2015-01-01

    Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large-scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large-scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
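    A toy illustration of "criticality as a set-point" (our sketch, not the hardware rule from the article): a branching process whose branching ratio is homeostatically nudged toward 1, the critical value at which avalanche sizes become scale-invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.5   # initial branching ratio (supercritical; assumption)
eta = 0.05    # homeostatic adaptation rate (assumption)

for epoch in range(500):
    # One avalanche: each active unit spawns Poisson(sigma) descendants.
    active, ancestors, descendants = 1, 0, 0
    while active and ancestors < 10_000:
        spawned = int(rng.poisson(sigma, size=active).sum())
        ancestors += active
        descendants += spawned
        active = spawned
    measured = descendants / ancestors                  # empirical branching ratio
    sigma = max(sigma + eta * (1.0 - measured), 0.05)   # drive toward 1

print(round(sigma, 3))  # settles near 1.0: criticality as the set-point
```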

  20. Femtosecond laser machining and lamination for large-area flexible organic microfluidic chips

    NASA Astrophysics Data System (ADS)

    Malek, C. Khan; Robert, L.; Salut, R.

    2009-04-01

    A hybrid process compatible with reel-to-reel manufacturing is developed for ultra low-cost large-scale manufacture of disposable microfluidic chips. It combines ultra-short laser microstructuring and lamination technology. Microchannels in polyester foils were formed using focused, high-intensity femtosecond laser pulses. Lamination using a commercial SU8-epoxy resist layer was used to seal the microchannel layer and cover foil. This hybrid process also enables heterogeneous material structuration and integration.

  1. Cognitive Control of Speech Perception across the Lifespan: A Large-Scale Cross-Sectional Dichotic Listening Study

    ERIC Educational Resources Information Center

    Westerhausen, René; Bless, Josef J.; Passow, Susanne; Kompus, Kristiina; Hugdahl, Kenneth

    2015-01-01

    The ability to use cognitive-control functions to regulate speech perception is thought to be crucial in mastering developmental challenges, such as language acquisition during childhood or compensation for sensory decline in older age, enabling interpersonal communication and meaningful social interactions throughout the entire life span.…

  2. Molecular Design of Multilayer Composites from Carbon Nanotubes

    DTIC Science & Technology

    2008-03-31

    approaches that will enable large scale and 5-30 times faster manufacturing of the LBL composites than traditional LBL: (1) dewetting method and (2...Films made by Dewetting Method Of Layer-By-Layer Assembly, Nano Letters 2007, 7(11), 3266-3273. Loh, K. J.; Lynch, J. P.; Shim, B. S.; Kotov, N. An

  3. Middle Managers in UK Higher Education Conceptualising Experiences in Support of Reflective Practice

    ERIC Educational Resources Information Center

    Birds, Rachel

    2014-01-01

    This paper examines the role of reflexivity in supporting middle managers in understanding and facilitating large-scale change management projects in their organisations. Utilising an example from a UK university, it is argued that the development of a conceptual model to fit local circumstances enables deeper understanding and better informed…

  4. Virtual Computing Laboratories: A Case Study with Comparisons to Physical Computing Laboratories

    ERIC Educational Resources Information Center

    Burd, Stephen D.; Seazzu, Alessandro F.; Conway, Christopher

    2009-01-01

    Current technology enables schools to provide remote or virtual computing labs that can be implemented in multiple ways ranging from remote access to banks of dedicated workstations to sophisticated access to large-scale servers hosting virtualized workstations. This paper reports on the implementation of a specific lab using remote access to…

  5. Strategies for Impact: Enabling E-Learning Project Initiatives

    ERIC Educational Resources Information Center

    Csete, Josephine; Evans, Jennifer

    2013-01-01

    Purpose: The paper aims to focus on institutional initiatives to embed e-learning in a university in Hong Kong, from 2006-12, through large-scale funding of 43 e-learning projects. It outlines the guiding principles behind the university's e-learning development and discusses the significance of various procedures and practices in project…

  6. Broadening the Boundaries of Psychology through Community Psychology

    ERIC Educational Resources Information Center

    Kagan, Carolyn

    2008-01-01

    This paper argues for community psychology to be included within the discipline boundaries of psychology. In doing this, it will enable psychology to begin to address some of the large scale social issues affecting people's well-being. It will be necessary, however, to incorporate aspects of other disciplines, make explicit the political…

  7. Parallel stitching of 2D materials

    DOE PAGES

    Ling, Xi; Wu, Lijun; Lin, Yuxuan; ...

    2016-01-27

    Diverse parallel stitched 2D heterostructures, including metal–semiconductor, semiconductor–semiconductor, and insulator–semiconductor, are synthesized directly through selective “sowing” of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. The methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for applications in integrated circuits.

  8. From Networked Learning to Operational Practice: Constructing and Transferring Superintendent Knowledge in a Regional Instructional Rounds Network

    ERIC Educational Resources Information Center

    Travis, Timothy J.

    2015-01-01

    Instructional rounds are an emerging network structure with processes and protocols designed to develop superintendents' knowledge and skills in leading large-scale improvement, to enable superintendents to build an infrastructure that supports the work of improvement, to assist superintendents in distributing leadership throughout their district,…

  9. A Survey and Analysis of Access Control Architectures for XML Data

    DTIC Science & Technology

    2006-03-01

    13 4. XML Query Engines ...castle and the drawbridge over the moat. Extending beyond the visual analogy, there are many key components to the protection of information and...technology. While XML’s original intent was to enable large-scale electronic publishing over the internet, its functionality is firmly rooted in its

  10. Bringing Open Educational Practice to a Research-Intensive University: Prospects and Challenges

    ERIC Educational Resources Information Center

    Masterman, Elizabeth

    2016-01-01

    This article describes a small-scale study that explored the relationship between the pedagogical practices characterised as "open" and the existing model of undergraduate teaching and learning at a large research-intensive university (RIU). The aim was to determine the factors that might enable (conversely impede) the greater uptake of…

  11. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  12. Development of Nuclear Renewable Oil Shale Systems for Flexible Electricity and Reduced Fossil Fuel Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel Curtis; Charles Forsberg; Humberto Garcia

    2015-05-01

    We propose the development of Nuclear Renewable Oil Shale Systems (NROSS) in northern Europe, China, and the western United States to provide large supplies of flexible, dispatchable, very-low-carbon electricity and fossil fuel production with reduced CO2 emissions. NROSS are a class of large hybrid energy systems in which base-load nuclear reactors provide the primary energy used to produce shale oil from kerogen deposits and simultaneously provide flexible, dispatchable, very-low-carbon electricity to the grid. Kerogen is solid organic matter trapped in sedimentary shale, and large reserves of this resource, called oil shale, are found in northern Europe, China, and the western United States. NROSS couples electricity generation and transportation fuel production in a single operation, reduces lifecycle carbon emissions from the fuel produced, improves revenue for the nuclear plant, and enables a major shift toward a very-low-carbon electricity grid. NROSS will require a significant development effort in the United States, where kerogen resources have never been developed on a large scale. In Europe, however, nuclear plants have been used for process heat delivery (district heating), and kerogen use is familiar in certain countries. Europe, China, and the United States all have the opportunity to use large-scale NROSS development to enable major growth in renewable generation and either substantially reduce or eliminate their dependence on foreign fossil fuel supplies, accelerating their transitions to cleaner, more efficient, and more reliable energy systems.

  13. Unique Testing Capabilities of the NASA Langley Transonic Dynamics Tunnel, an Exercise in Aeroelastic Scaling

    NASA Technical Reports Server (NTRS)

    Ivanco, Thomas G.

    2013-01-01

    NASA Langley Research Center's Transonic Dynamics Tunnel (TDT) is the world's most capable aeroelastic test facility. Its large size, transonic speed range, variable pressure capability, and use of either air or R-134a heavy gas as a test medium enable unparalleled manipulation of flow-dependent scaling quantities. Matching these scaling quantities enables dynamic similitude of a full-scale vehicle with a sub-scale model, a requirement for proper characterization of any dynamic phenomenon and many static elastic phenomena. Select scaling parameters are presented in order to quantify the scaling advantages of the TDT and the consequences of testing in other facilities. In addition to dynamic testing, the TDT is uniquely well-suited for high-risk testing or for tests that require unusual model mount or support systems. Examples of recently conducted dynamic tests requiring unusual model support are presented. In addition to its unique dynamic test capabilities, the TDT is also evaluated for its capability to conduct aerodynamic performance tests, as determined by its flow quality. Results of flow quality studies and a comparison to many other transonic facilities are presented. Finally, the ability of the TDT to support future NASA research thrusts and likely vehicle designs is discussed.

  14. Cosmology with CLASS

    NASA Astrophysics Data System (ADS)

    Watts, Duncan; CLASS Collaboration

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.

  15. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
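    The linear-scaling premise rests on locality: with spatially localized basis functions, the matrices are sparse and the cost of storing and applying them grows linearly with system size. A generic illustration with SciPy's block-diagonal sparse format follows (not the hierarchical block-sparse scheme implemented in ONETEP):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Operators built from small dense blocks (one block per group of localized
# functions): storage and matrix-vector cost grow as O(N), not O(N^2).
for nblocks in (100, 1_000, 10_000):
    blocks = [rng.standard_normal((10, 10)) for _ in range(nblocks)]
    S = sp.block_diag(blocks, format='csr')
    _ = S @ np.ones(S.shape[0])   # O(nnz) = O(N) apply
    print(S.shape[0], S.nnz)      # nnz = 100 * nblocks, linear in N
```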

  16. Using electronic medical records to enable large-scale studies in psychiatry: treatment resistant depression as a model

    PubMed Central

    Perlis, R. H.; Iosifescu, D. V.; Castro, V. M.; Murphy, S. N.; Gainer, V. S.; Minnier, J.; Cai, T.; Goryachev, S.; Zeng, Q.; Gallagher, P. J.; Fava, M.; Weilburg, J. B.; Churchill, S. E.; Kohane, I. S.; Smoller, J. W.

    2013-01-01

    Background: Electronic medical records (EMR) provide a unique opportunity for efficient, large-scale clinical investigation in psychiatry. However, such studies will require development of tools to define treatment outcome. Method: Natural language processing (NLP) was applied to classify notes from 127 504 patients with a billing diagnosis of major depressive disorder, drawn from out-patient psychiatry practices affiliated with multiple, large New England hospitals. Classifications were compared with results using billing data (ICD-9 codes) alone and to a clinical gold standard based on chart review by a panel of senior clinicians. These cross-sectional classifications were then used to define longitudinal treatment outcomes, which were compared with a clinician-rated gold standard. Results: Models incorporating NLP were superior to those relying on billing data alone for classifying current mood state (area under receiver operating characteristic curve of 0.85–0.88 v. 0.54–0.55). When these cross-sectional visits were integrated to define longitudinal outcomes and incorporate treatment data, 15% of the cohort remitted with a single antidepressant treatment, while 13% were identified as failing to remit despite at least two antidepressant trials. Non-remitting patients were more likely to be non-Caucasian (p<0.001). Conclusions: The application of bioinformatics tools such as NLP should enable accurate and efficient determination of longitudinal outcomes, enabling existing EMR data to be applied to clinical research, including biomarker investigations. Continued development will be required to better address moderators of outcome such as adherence and co-morbidity. PMID:21682950
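    As a hedged miniature of the NLP classification step (the toy notes and labels below are invented for illustration and are not the study's features or model):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy clinical-note snippets with mood-state labels (1 = euthymic/remitted,
# 0 = currently depressed). Real training used chart-reviewed gold standards.
notes = ["mood much improved, sleeping well",
         "persistent low mood despite sertraline",
         "denies depressive symptoms, euthymic",
         "worsening anhedonia and fatigue"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["reports stable mood and good sleep"]))  # expect [1]
```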

  17. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeating execution many times with various patterns of scenarios or parameters. Such repeated execution brings about considerable redundancy because the change from a prior scenario to a later scenario is very minor in most cases, for example, blocking only one road or changing the speed limit of several roads. In this paper, we propose a new redundancy reduction technique, called exact-differential simulation, which enables simulating only the changed scenarios in later executions while keeping exactly the same results as in the case of whole simulation. The paper consists of two main efforts: (i) a key idea and algorithm of the exact-differential simulation, (ii) a method to build large-scale traffic simulation on top of the exact-differential simulation. In experiments of Tokyo traffic simulation, the exact-differential simulation shows a 7.26-fold improvement in elapsed time on average, and a 2.26-fold improvement even in the worst case, compared with whole simulation.
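    A toy of the redundancy-reduction idea only (the paper's method operates on simulation events with exact state handling, not on memoized functions): cache per-segment results so that a scenario differing in one road re-executes only a small part of the pipeline.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def segment_flow(road_id, speed_limit):
    # Stand-in for an expensive microsimulation of one road segment.
    return sum((road_id * k) % speed_limit for k in range(1, 1000)) / 1000.0

def total_flow(limits):
    return sum(segment_flow(i, s) for i, s in enumerate(limits, start=1))

base = tuple([50] * 100)
print(total_flow(base))            # simulates all 100 segments
changed = base[:10] + (30,) + base[11:]
print(total_flow(changed))         # recomputes only the one changed segment
print(segment_flow.cache_info())   # 101 misses total across both runs
```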

  18. XLinkDB 2.0: integrated, large-scale structural analysis of protein crosslinking data

    PubMed Central

    Schweppe, Devin K.; Zheng, Chunxiang; Chavez, Juan D.; Navare, Arti T.; Wu, Xia; Eng, Jimmy K.; Bruce, James E.

    2016-01-01

    Motivation: Large-scale chemical cross-linking with mass spectrometry (XL-MS) analyses are quickly becoming a powerful means for high-throughput determination of protein structural information and protein–protein interactions. Recent studies have garnered thousands of cross-linked interactions, yet the field lacks an effective tool to compile experimental data or access the network and structural knowledge for these large scale analyses. We present XLinkDB 2.0 which integrates tools for network analysis, Protein Databank queries, modeling of predicted protein structures and modeling of docked protein structures. The novel, integrated approach of XLinkDB 2.0 enables the holistic analysis of XL-MS protein interaction data without limitation to the cross-linker or analytical system used for the analysis. Availability and Implementation: XLinkDB 2.0 can be found here, including documentation and help: http://xlinkdb.gs.washington.edu/. Contact: jimbruce@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153666

  19. Remote magnetic actuation using a clinical scale system

    PubMed Central

    Stehning, Christian; Gleich, Bernhard

    2018-01-01

    Remote magnetic manipulation is a powerful technique for controlling devices inside the human body. It enables actuation and locomotion of tethered and untethered objects without the need for a local power supply. In clinical applications, it is used for active steering of catheters in medical interventions such as cardiac ablation for arrhythmia treatment and for steering of camera pills in the gastro-intestinal tract for diagnostic video acquisition. For these applications, specialized clinical-scale field applicators have been developed, which are rather limited in terms of field strength and flexibility of field application. For a general-purpose field applicator, flexible field generation is required at high field strengths as well as high field gradients to enable the generation of both torques and forces on magnetic devices. To date, this requirement has only been met by small-scale experimental systems. We have built a highly versatile clinical-scale field applicator that enables the generation of strong magnetic fields as well as strong field gradients over a large workspace. We demonstrate the capabilities of this coil-based system by remote steering of magnetic drills through gel and tissue samples with high torques on well-defined curved trajectories. We also give initial proof that, when equipped with high frequency transmit-receive coils, the machine is capable of real-time magnetic particle imaging while retaining a clinical-scale bore size. Our findings open the door for image-guided radiation-free remote magnetic control of devices at the clinical scale, which may be useful in minimally invasive diagnostic and therapeutic medical interventions. PMID:29494647

  20. The 'cube' meta-model for the information system of large health sector organizations--a (platform neutral) mapping tool to integrate information system development with changing business functions and organizational development.

    PubMed

    Balkányi, László

    2002-01-01

    To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model, avoiding the techno-jargon of informatics, may be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods lead to a simple but comprehensive mapping, in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organization functionality, (b) organizational structures and (c) information technology. Each of the cube sides is described according to its nature. This approach enables one to define any kind of IS component as a certain point/layer/domain of the cube, and also enables management to label all IS components independently from any supplier(s) and/or any specific platform. The model handles changes in organization structure, business functionality and the serving info-system independently from each other. Practical application extends to (a) planning complex new ISs, (b) guiding development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phases by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.

  2. Spectral Quadrature method for accurate O ( N ) electronic structure calculations of metals and insulators

    DOE PAGES

    Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.

    2015-12-02

    We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N^3) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
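    For reference, a self-contained sketch of Clenshaw–Curtis nodes and weights on [-1, 1] (following the well-known formulation in Trefethen's "Spectral Methods in MATLAB"; this illustrates the quadrature rule itself, not the paper's spatially localized operator quadrature):

```python
import numpy as np

def clencurt(n):
    """Clenshaw-Curtis nodes x and weights w on [-1, 1] with n + 1 points."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:n]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

x, w = clencurt(16)
print(w @ np.exp(x))   # ~2.3504, i.e. the exact integral e - 1/e over [-1, 1]
```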

  3. Sparse network modeling and metscape-based visualization methods for the analysis of large-scale metabolomics data.

    PubMed

    Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla

    2017-05-15

    Recent technological advances in mass spectrometry and the development of richer mass spectral libraries and data processing tools have enabled large-scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limit the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified, but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and by aiding identification of unknown compounds. The tools are available at http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
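    To make the partial-correlation idea concrete, here is a hedged sketch using the graphical lasso (a related, but not identical, sparse precision-matrix estimator; the data and penalty are arbitrary assumptions, and DSPC itself adds a debiasing step not shown here):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))     # 200 samples x 30 metabolite features

# Sparse inverse covariance (precision) estimate.
P = GraphicalLasso(alpha=0.2).fit(X).precision_

# Partial correlations from the precision matrix: r_ij = -P_ij / sqrt(P_ii P_jj)
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)

# Edges of the metabolite network: nonzero off-diagonal partial correlations.
edges = np.argwhere(np.abs(np.triu(pcorr, k=1)) > 1e-8)
print(len(edges), "network edges")
```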

  4. James Webb Space Telescope Core 2 Test - Cryogenic Thermal Balance Test of the Observatory's Core Area Thermal Control Hardware

    NASA Technical Reports Server (NTRS)

    Cleveland, Paul; Parrish, Keith; Thomson, Shaun; Marsh, James; Comber, Brian

    2016-01-01

    The James Webb Space Telescope (JWST), successor to the Hubble Space Telescope, will be the largest astronomical telescope ever sent into space. To observe the very first light of the early universe, JWST requires a large deployed 6.5-meter primary mirror cryogenically cooled to less than 50 Kelvin. Three scientific instruments are further cooled via a large radiator system to less than 40 Kelvin. A fourth scientific instrument is cooled to less than 7 Kelvin using a combined pulse-tube/Joule-Thomson mechanical cooler. Passive cryogenic cooling enables the large scale of the telescope, which must be highly folded for launch on an Ariane 5 launch vehicle and deployed once on orbit during its journey to the second Earth-Sun Lagrange point. Passive cooling of the observatory is enabled by the deployment of a large, tennis-court-sized, five-layer sunshield combined with the use of a network of high-efficiency radiators. A high-purity aluminum heat strap system connects the three instruments' detector systems to the radiator systems to dissipate less than a single watt of parasitic and instrument-dissipated heat. JWST's large-scale features, while enabling passive cooling, also prevent the typical flight-configuration, fully-deployed thermal balance test that is the keystone of most space missions' thermal verification plans. This paper describes the JWST Core 2 Test, which is a cryogenic thermal balance test of a full-size, high-fidelity engineering model of the Observatory's 'Core' area thermal control hardware. The 'Core' area is the key mechanical and cryogenic interface area between all Observatory elements. The 'Core' area thermal control hardware allows for the temperature transition from 300 K to approximately 50 K by attenuating heat from the room-temperature IEC (instrument electronics) and the Spacecraft Bus. Since the flight hardware is not available for test, the Core 2 test uses high-fidelity, flight-like reproductions.

  5. Using complexity theory to develop a student-directed interprofessional learning activity for 1220 healthcare students.

    PubMed

    Jorm, Christine; Nisbet, Gillian; Roberts, Chris; Gordon, Christopher; Gentilcore, Stacey; Chen, Timothy F

    2016-08-08

    More and better interprofessional practice is predicted to be necessary to deliver good care to the patients of the future. However, universities struggle to create authentic learning activities that enable students to experience the dynamic interprofessional interactions common in healthcare and that can accommodate large interprofessional student cohorts. We investigated a large-scale mandatory interprofessional learning (IPL) activity for health professional students designed to promote social learning. A mixed methods research approach determined feasibility, acceptability and the extent to which student IPL outcomes were met. We developed an IPL activity founded in complexity theory to prepare students for future practice by engaging them in a self-directed (self-organised) learning activity with a diverse team, whose assessable products would be emergent creations. Complicated but authentic clinical cases (n = 12) were developed to challenge student teams (n = 5 or 6). Assessment consisted of a written management plan (academically marked) and a five-minute video (peer marked) designed to assess creative collaboration as well as provide evidence of integrated collective knowledge; the cohesive patient-centred management plan. All students (including the disciplines of diagnostic radiology, exercise physiology, medicine, nursing, occupational therapy, pharmacy, physiotherapy and speech pathology), completed all tasks successfully. Of the 26 % of students who completed the evaluation survey, 70 % agreed or strongly agreed that the IPL activity was worthwhile, and 87 % agreed or strongly agreed that their case study was relevant. Thematic analysis found overarching themes of engagement and collaboration-in-action suggesting that the IPL activity enabled students to achieve the intended learning objectives. Students recognised the contribution of others and described negotiation, collaboration and creation of new collective knowledge after working together on the complicated patient case studies. The novel video assessment was challenging to many students and contextual issues limited engagement for some disciplines. We demonstrated the feasibility and acceptability of a large-scale IPL activity where the design of cases, format and assessment tasks was founded in complexity theory. This theoretically based design enabled students to achieve complex IPL outcomes relevant to future practice. Future research could establish the psychometric properties of assessments of student performance in large-scale IPL events.

  6. Simulations of neutral wind shear effect on the equatorial ionosphere irregularities

    NASA Astrophysics Data System (ADS)

    Kim, J.; Chagelishvili, G.; Horton, W.

    2005-12-01

    We present numerical calculations of the large-scale electron density driven by the gradient drift instability in the daytime equatorial electrojet. Under two-fluid theory, the linear analysis for kilometer-scale waves leads to the result that all perturbations are transformed to small scales through linear convection by shear and then damped by diffusion. The inclusion of the nonlinearity enables an inverse energy cascade that provides energy to long scales. The feedback between velocity shear and nonlinearity keeps waves growing and leads to turbulence. In the strongly turbulent regime, the nonlinear states are saturated [1]. Since the convective nonlinearities are isotropic while the interactions of velocity shear with waves are anisotropic, the feedback does not necessarily enable waves to grow; the growth of waves is highly variable in the k-space configuration [2]. Our simulations show that the directional relationship between the vorticity of irregularities and the shear is one of the key factors. Thus, during the transient period, the irregularities show anisotropy of the vorticity power spectrum. We report the evolution of the power spectrum of the vorticity and density of irregularities and its anisotropic nature, as observed. The work was supported in part by NSF Grant ATM-0229863 and ISTC Grant G-553. [1] C. Ronchi, R.N. Sudan, and D.T. Farley. Numerical simulations of large-scale plasma turbulence in the daytime equatorial electrojet. J. Geophys. Res., 96:21263-21279, 1991. [2] G.D. Chagelishvili, R.G. Chanishvili, T.S. Hristov, and J.G. Lominadze. A turbulence model in unbounded smooth shear flows: The weak turbulence approach. JETP, 94(2):434-445, 2002.

  7. A Commercialization Roadmap for Carbon-Negative Energy Systems

    NASA Astrophysics Data System (ADS)

    Sanchez, D.

    2016-12-01

    The Intergovernmental Panel on Climate Change (IPCC) envisages the need for large-scale deployment of net-negative CO2 emissions technologies by mid-century to meet stringent climate mitigation goals and yield a net drawdown of atmospheric carbon. Yet there are few commercial deployments of BECCS outside of niche markets, creating uncertainty about commercialization pathways and sustainability impacts at scale. This uncertainty is exacerbated by the absence of a strong policy framework, such as high carbon prices and research coordination. Here, we propose a strategy for the potential commercial deployment of BECCS. This roadmap proceeds via three steps: 1) via capture and utilization of biogenic CO2 from existing bioenergy facilities, notably ethanol fermentation, 2) via thermochemical co-conversion of biomass and fossil fuels, particularly coal, and 3) via dedicated, large-scale BECCS. Although biochemical conversion is a proven first market for BECCS, this trajectory alone is unlikely to drive commercialization of BECCS at the gigatonne scale. In contrast to biochemical conversion, thermochemical conversion of coal and biomass enables large-scale production of fuels and electricity with a wide range of carbon intensities, process efficiencies and process scales. Aside from systems integration, primarily technical barriers are involved in large-scale biomass logistics, gasification and gas cleaning. Key uncertainties around large-scale BECCS deployment are not limited to commercialization pathways; rather, they include physical constraints on biomass cultivation or CO2 storage, as well as social barriers, including public acceptance of new technologies and conceptions of renewable and fossil energy, which co-conversion systems confound. Despite sustainability risks, this commercialization strategy presents a pathway where energy suppliers, manufacturers and governments could transition from laggards to leaders in climate change mitigation efforts.

  8. Hybrid reduced order modeling for assembly calculations

    DOE PAGES

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; ...

    2015-08-14

    While the accuracy of assembly calculations has greatly improved due to the increase in computer power enabling more refined description of the phase space and the use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution in small computing environments often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This paper extends those works to coupled code systems as currently employed in assembly calculations. Finally, numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
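    The dimensionality-reduction step can be illustrated generically with proper orthogonal decomposition (a sketch of the "capture the dominant input-output structure" idea on synthetic snapshots, not the paper's coupled SCALE ROM):

```python
import numpy as np

# Synthetic snapshot matrix: 40 high-fidelity runs of a 5000-dimensional
# output that secretly lives near a 5-dimensional subspace (assumption).
rng = np.random.default_rng(2)
modes = rng.standard_normal((5000, 5))
weights = rng.standard_normal((5, 40))
snapshots = modes @ weights + 0.01 * rng.standard_normal((5000, 40))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1   # rank capturing 99% of energy
basis = U[:, :r]                             # reduced-order basis

# Any full-order field y is now represented by just r coefficients.
y = snapshots[:, 0]
y_rom = basis @ (basis.T @ y)
print(r, np.linalg.norm(y - y_rom) / np.linalg.norm(y))  # r ~ 5, tiny error
```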

  9. An interactive web application for the dissemination of human systems immunology data.

    PubMed

    Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien

    2015-06-19

    Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.

  10. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.

  11. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  12. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, S.; Yokosawa, M.; Matsuyama, M.

    To study the practical application of a tritium separation process using Self-Developing Gas Chromatography (SDGC) with a Pd-Pt alloy, intermediate scale-up experiments (22 mm ID x 2 m length column) and the development of a computational simulation method have been conducted. In addition, intermediate-scale production of Pd-Pt powder has been developed for the scale-up experiments. The following results were obtained: (1) a 50-fold scale-up from 3 mm to 22 mm causes no significant impact on the SDGC process; (2) the Pd-Pt alloy powder is applicable to a large-size SDGC process; and (3) the simulation enables preparation of a conceptual design of a SDGC process for tritium separation.

  14. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to be applied depends on the overall question under study; therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth looking at the advantages and the shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large-scale and global approaches and models are increasingly in operation, and the question therefore arises how far and for what purposes such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale and modelling scale differ from each other. In some cases, the differences between these scales can be of several orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. We summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models at different scales as follows.
    Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research based on controlled experiments (e.g. infiltration, root water uptake, chemical matter transport); (+) state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on first principles, partly of PDE type, are available for several processes (but not for all), because measurement and modelling scales are compatible; (-) the spatial model domain is hardly representative of larger spatial entities, including regions for which water resources management decisions are to be taken, and straightforward upsizing is limited by data availability and computational requirements.
    Meso scale (e.g. extent of a small to large catchment or region): (+) the model domain has approximately the same extent as the regions for which water resources management decisions are to be taken, i.e. such models enable water resources quantification at the scale of most water management decisions; (+) data on some state conditions (e.g. vegetation cover, topography, river network and cross sections) are available; (+) some boundary fluxes (in particular surface runoff / channel flow) are directly measurable with mostly sufficient certainty; (+) equations, partly based on simple water budgeting, partly variants of PDE-type equations, are available for most hydrological processes, which enables the construction of meso-scale distributed models reflecting the spatial heterogeneity of regions and landscapes; (-) process scale, measurement scale and modelling scale differ from each other for a number of processes, such as runoff generation; (-) the process formulations (usually derived from micro-scale studies) cannot directly be transferred to the modelling domain, and upscaling procedures for this purpose are not readily and generally available.
    Macro scale (e.g. extent of a continent up to global): (+) the spatial extent of the model may cover the whole Earth, which enables an attractive global display of model results; (+) model results might be technically interchangeable with, or at least comparable to, results from other global models, such as global climate models; (-) process scale, measurement scale and modelling scale differ heavily from each other for all hydrological and associated processes; (-) the model domain and its results are not representative of the regions for which water resources management decisions are to be taken; (-) both state condition and boundary flux data are hardly available for the whole model domain, and water management and discharge data from remote regions are particularly incomplete or unavailable at this scale, which undermines the model's verifiability; (-) since process formulation and the resulting modelling reliability at this scale are very limited, such models can hardly show explanatory skill or prognostic power; (-) since both the entire model domain and its spatial sub-units cover large areas, model results represent values averaged over at least the extent of a spatial sub-unit, and in many cases the applied time scale implies long-term averaging in time, too.
    We emphasize the importance of being aware of the above-mentioned strengths and weaknesses of these scale-specific models. Many results of current global model studies do not reflect such limitations. In particular, we consider the averaging over large model entities in space and/or time inadequate: many hydrological processes are of a non-linear nature, including threshold-type behaviour, and such features cannot be represented by large-scale entities. The model results therefore can be of little or no use for water resources decisions, and may even be misleading for public debates or decision making. Some rather newly developed sustainability concepts, e.g. "Planetary Boundaries" within which humanity may "continue to develop and thrive for generations to come", are based on such global-scale approaches and models. However, many of the major problems regarding sustainability on Earth, e.g. water scarcity, manifest not at the global but at the regional scale. While on a global scale water might appear to be available in sufficient quantity and quality, there are many regions where water problems already have very harmful or even devastating effects. Therefore, the challenge is to derive models and observation programmes for regional scales. In case a global display is desired, future efforts should be directed towards developing a global picture based on a mosaic of sound regional assessments, rather than "zooming into" the results of large-scale simulations. Still, a key question remains to be discussed: for which purposes can models at this (global) scale be used?

  15. Next-to-leading order Balitsky-Kovchegov equation with resummation

    DOE PAGES

    Lappi, T.; Mantysaari, H.

    2016-05-03

    Here, we solve the Balitsky-Kovchegov evolution equation at next-to-leading order accuracy including a resummation of large single and double transverse momentum logarithms to all orders. We numerically determine an optimal value for the constant under the large transverse momentum logarithm that enables including a maximal amount of the full NLO result in the resummation. When this value is used, the contribution from the α_s^2 terms without large logarithms is found to be small at large saturation scales and at small dipoles. Close to initial conditions relevant for phenomenological applications, these fixed-order corrections are shown to be numerically important.

  16. Designing, Building, and Connecting Networks to Support Distributed Collaborative Empirical Writing Research

    ERIC Educational Resources Information Center

    Brunk-Chavez, Beth; Pigg, Stacey; Moore, Jessie; Rosinski, Paula; Grabill, Jeffrey T.

    2018-01-01

    To speak to diverse audiences about how people learn to write and how writing works inside and outside the academy, we must conduct research across geographical, institutional, and cultural contexts as well as research that enables comparison when appropriate. Large-scale empirical research is useful for both of these moves; however, we must…

  17. Researching Returns Emanating from Participation in Adult Education Courses: A Quantitative Approach

    ERIC Educational Resources Information Center

    Panitsides, Eugenia

    2013-01-01

    Throughout contemporary literature, participants in adult education courses have been reported to acquire knowledge and skills, develop understanding and enhance self-confidence, parameters that induce changes in their personal lives, while enabling them to play a more active role in their family, community or work. In this vein, a large-scale,…

  18. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
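
    The abstract above describes a pattern worth making concrete: a Python driver that chains compiled modules and runs independent work items in parallel threads while logging each step. The sketch below is a minimal, hypothetical illustration of that orchestration style; the stage functions (mosaic, preprocess, segment, extract_features) are placeholders, not actual FARSIGHT module names or APIs.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import logging

    logging.basicConfig(level=logging.INFO, filename="pipeline.log")

    # Hypothetical pipeline stages standing in for compiled C++ modules.
    def mosaic(tile_paths):
        logging.info("mosaicking %d tiles", len(tile_paths))
        return "mosaic.tif"  # placeholder for the stitched multi-channel volume

    def preprocess(volume):
        logging.info("correcting imaging artifacts in %s", volume)
        return volume

    def segment(volume, channel):
        logging.info("segmenting channel %d of %s", channel, volume)
        return [{"cell_id": i, "channel": channel} for i in range(3)]

    def extract_features(cells):
        return [dict(c, soma_volume=0.0) for c in cells]

    def run_pipeline(tile_paths, n_channels=5, workers=10):
        volume = preprocess(mosaic(tile_paths))
        # The fluorescent channels are independent, so segment them in parallel
        # threads, mirroring the multi-threaded execution the abstract describes.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            per_channel = pool.map(lambda ch: segment(volume, ch), range(n_channels))
        return [extract_features(cells) for cells in per_channel]

    if __name__ == "__main__":
        run_pipeline([f"tile_{i}.tif" for i in range(20)])
    ```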

  19. Sand waves in environmental flows: Insights gained by coupling large-eddy simulation with morphodynamics

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Khosronejad, Ali

    2016-02-01

    Sand waves arise in subaqueous and aeolian environments as the result of the complex interaction between turbulent flows and mobile sand beds. They occur across a wide range of spatial scales, evolve at temporal scales much slower than the integral scale of the transporting turbulent flow, dominate river morphodynamics, undermine streambank stability and infrastructure during flooding, and sculpt terrestrial and extraterrestrial landscapes. In this paper, we present the vision for our work over the last ten years, which has sought to develop computational tools capable of simulating the coupled interactions of sand waves with turbulence across the broad range of relevant scales: from small-scale ripples in laboratory flumes to mega-dunes in large rivers. We review the computational advances that have enabled us to simulate the genesis and long-term evolution of arbitrarily large and complex sand dunes in turbulent flows using large-eddy simulation and summarize numerous novel physical insights derived from our simulations. Our findings explain the role of turbulent sweeps in the near-bed region as the primary mechanism for destabilizing the sand bed, show that the seeds of the emergent structure in dune fields lie in the heterogeneity of the turbulence and bed shear stress fluctuations over the initially flat bed, and elucidate how large dunes at equilibrium give rise to energetic coherent structures and modify the spectra of turbulence. We also discuss future challenges and our vision for advancing a data-driven, simulation-based engineering science approach for site-specific simulations of river flooding.

  20. Unmanned Aircraft Systems Traffic Management (UTM) Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2017-01-01

    Conduct research, development, and testing to identify the airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace. Use a build-a-little-test-a-little strategy, progressing from remote areas to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  1. Role of Concentrating Solar Power in Integrating Solar and Wind Energy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, P.; Mehos, M.

    2015-06-03

    As wind and solar photovoltaics (PV) increase in penetration it is increasingly important to examine enabling technologies that can help integrate these resources at large scale. Concentrating solar power (CSP) when deployed with thermal energy storage (TES) can provide multiple services that can help integrate variable generation (VG) resources such as wind and PV. CSP with TES can provide firm, highly flexible capacity, reducing the minimum generation constraints that limit penetration and result in curtailment. By acting as an enabling technology, CSP can complement PV and wind, substantially increasing their penetration in locations with adequate solar resource.

  2. Transaction-Based Building Controls Framework, Volume 1: Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.

    This document proposes a framework concept for raising buildings' efficiency and energy-savings potential, benefitting building owners and operators. We call it a transaction-based framework, wherein mutually beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to society at large.

  3. Selection and Manufacturing of Membrane Materials for Solar Sails

    NASA Technical Reports Server (NTRS)

    Bryant, Robert G.; Seaman, Shane T.; Wilkie, W. Keats; Miyaucchi, Masahiko; Working, Dennis C.

    2013-01-01

    Commercial metallized polyimide or polyester films and hand-assembly techniques are acceptable for small solar sail technology demonstrations, although scaling this approach to large sail areas is impractical. Opportunities now exist to use new polymeric materials specifically designed for solar sailing applications, and take advantage of integrated sail manufacturing to enable large-scale solar sail construction. This approach has, in part, been demonstrated on the JAXA IKAROS solar sail demonstrator, and NASA Langley Research Center is now developing capabilities to produce ultrathin membranes for solar sails by integrating resin synthesis with film forming and sail manufacturing processes. This paper will discuss the selection and development of polymer material systems for space, and these new processes for producing ultrathin high-performance solar sail membrane films.

  4. Shaping carbon nanostructures by controlling the synthesis process

    NASA Astrophysics Data System (ADS)

    Merkulov, Vladimir I.; Guillorn, Michael A.; Lowndes, Douglas H.; Simpson, Michael L.; Voelkl, Edgar

    2001-08-01

    The ability to control the nanoscale shape of nanostructures in a large-scale synthesis process is an essential and elusive goal of nanotechnology research. Here, we report significant progress toward that goal. We have developed a technique that enables controlled synthesis of nanoscale carbon structures with conical and cylinder-on-cone shapes and provides the capability to dynamically change the nanostructure shape during the synthesis process. In addition, we present a phenomenological model that explains the formation of these nanostructures and provides insight into methods for precisely engineering their shape. Since the growth process we report is highly deterministic in allowing large-scale synthesis of precisely engineered nanoscale components at defined locations, our approach provides an important tool for a practical nanotechnology.

  5. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  6. Concepts for on-board satellite image registration. Volume 3: Impact of VLSI/VHSIC on satellite on-board signal processing

    NASA Technical Reports Server (NTRS)

    Aanstoos, J. V.; Snyder, W. E.

    1981-01-01

    Anticipated major advances in integrated circuit technology in the near future are described, as well as their impact on satellite onboard signal processing systems. Dramatic improvements in chip density, speed, power consumption, and system reliability are expected from very large scale integration. These improvements will enable more intelligence to be placed on remote sensing platforms in space, meeting the goals of NASA's information adaptive system concept, a major component of the NASA End-to-End Data System program. A forecast of VLSI technological advances is presented, including a description of the Defense Department's very high speed integrated circuit program, a seven-year research and development effort.

  7. Coherent nonhelical shear dynamos driven by magnetic fluctuations at low Reynolds numbers

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-28

    Nonhelical shear dynamos are studied with a particular focus on the possibility of coherent dynamo action. The primary results, serving as a follow-up to the results of Squire & Bhattacharjee, pertain to the "magnetic shear-current effect" as a viable mechanism to drive large-scale magnetic field generation. This effect raises the interesting possibility that the saturated state of the small-scale dynamo could drive large-scale dynamo action, and is likely to be important in the unstratified regions of accretion disk turbulence. In this paper, the effect is studied at low Reynolds numbers, removing the complications of small-scale dynamo excitation and aiding analysis by enabling the use of quasi-linear statistical simulation methods. In addition to the magnetically driven dynamo, new results on the kinematic nonhelical shear dynamo are presented. Furthermore, these illustrate the relationship between coherent and incoherent driving in such dynamos, demonstrating the importance of rotation in determining the relative dominance of each mechanism.

  8. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.

  9. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    PubMed Central

    Jensen, Tue V.; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness, and resolution of this dataset open the door to the evaluation, scaling analysis, and replicability check of a wealth of proposals in, e.g., market design, network actor coordination, and forecasting of renewable power generation. PMID:29182600

  10. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    PubMed

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness, and resolution of this dataset open the door to the evaluation, scaling analysis, and replicability check of a wealth of proposals in, e.g., market design, network actor coordination, and forecasting of renewable power generation.

  11. COHERENT NONHELICAL SHEAR DYNAMOS DRIVEN BY MAGNETIC FLUCTUATIONS AT LOW REYNOLDS NUMBERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Squire, J.; Bhattacharjee, A., E-mail: jsquire@caltech.edu

    2015-11-01

    Nonhelical shear dynamos are studied with a particular focus on the possibility of coherent dynamo action. The primary results, serving as a follow-up to the results of Squire and Bhattacharjee, pertain to the “magnetic shear-current effect” as a viable mechanism to drive large-scale magnetic field generation. This effect raises the interesting possibility that the saturated state of the small-scale dynamo could drive large-scale dynamo action, and is likely to be important in the unstratified regions of accretion disk turbulence. In this paper, the effect is studied at low Reynolds numbers, removing the complications of small-scale dynamo excitation and aiding analysis by enabling the use of quasi-linear statistical simulation methods. In addition to the magnetically driven dynamo, new results on the kinematic nonhelical shear dynamo are presented. These illustrate the relationship between coherent and incoherent driving in such dynamos, demonstrating the importance of rotation in determining the relative dominance of each mechanism.

  12. Barriers Inhibiting Inquiry-Based Science Teaching and Potential Solutions: Perceptions of Positively Inclined Early Adopters

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Michael; Danaia, Lena; McKinnon, David H.

    2017-07-01

    In recent years, calls for the adoption of inquiry-based pedagogies in the science classroom have formed a part of the recommendations for large-scale high school science reforms. However, these pedagogies have been problematic to implement at scale. This research explores the perceptions of 34 positively inclined early-adopter teachers in relation to their implementation of inquiry-based pedagogies. The teachers were part of a large-scale Australian high school intervention project based around astronomy. In a series of semi-structured interviews, the teachers identified a number of common barriers that prevented them from implementing inquiry-based approaches. The most important barriers identified include the extreme time restrictions on all scales, the poverty of their common professional development experiences, their lack of good models and definitions for what inquiry-based teaching actually is, and the lack of good resources enabling the capacity for change. Implications for expectations of teachers and their professional learning during educational reform and curriculum change are discussed.

  13. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    NASA Astrophysics Data System (ADS)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling of such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model with information on generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness, and resolution of this dataset open the door to the evaluation, scaling analysis, and replicability check of a wealth of proposals in, e.g., market design, network actor coordination, and forecasting of renewable power generation.

  14. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project that uses NEXUS as the core for an oceanographic anomaly detection service and web portal. We call it OceanXtremes.
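
    The chunked-storage scheme described above can be sketched in a few lines. In this toy version a Python dict stands in for the horizontally scaled NoSQL store, and the tile size and key layout are illustrative assumptions rather than the actual NEXUS schema; the point is that a spatial subset only touches the tiles it overlaps.

    ```python
    import numpy as np

    store = {}  # stand-in for a NoSQL store such as Cassandra
    TILE = 64   # tile edge length in grid cells (illustrative)

    def ingest(dataset, time_index, grid):
        """Split one time slice of a geo-referenced 2-D grid into tiles."""
        rows, cols = grid.shape
        for r in range(0, rows, TILE):
            for c in range(0, cols, TILE):
                key = (dataset, time_index, r // TILE, c // TILE)
                store[key] = grid[r:r + TILE, c:c + TILE].copy()

    def subset(dataset, time_index, r0, r1, c0, c1):
        """Fetch only the tiles overlapping the requested region, then crop."""
        tiles_r = range(r0 // TILE, (r1 - 1) // TILE + 1)
        tiles_c = range(c0 // TILE, (c1 - 1) // TILE + 1)
        block = np.block([[store[(dataset, time_index, tr, tc)] for tc in tiles_c]
                          for tr in tiles_r])
        off_r, off_c = r0 % TILE, c0 % TILE
        return block[off_r:off_r + (r1 - r0), off_c:off_c + (c1 - c0)]

    ingest("sst", 0, np.random.rand(256, 256))
    region = subset("sst", 0, 10, 100, 30, 90)  # on-the-fly subsetting from tiles
    ```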

  15. Multilength Scale Patterning of Functional Layers by Roll-to-Roll Ultraviolet-Light-Assisted Nanoimprint Lithography.

    PubMed

    Leitgeb, Markus; Nees, Dieter; Ruttloff, Stephan; Palfinger, Ursula; Götz, Johannes; Liska, Robert; Belegratis, Maria R; Stadlober, Barbara

    2016-05-24

    Top-down fabrication of nanostructures with high throughput is still a challenge. We demonstrate the fast (>10 m/min) and continuous fabrication of multilength scale structures by roll-to-roll UV-nanoimprint lithography on a 250 mm wide web. The large-area nanopatterning is enabled by a multicomponent UV-curable resist system (JRcure) with viscous, mechanical, and surface properties that are tunable over a wide range to either allow for usage as polymer stamp material or as imprint resist. The adjustable elasticity and surface chemistry of the resist system enable multistep self-replication of structured resist layers. Minimizing the surface energies of stamp and resist is decisive for defect-free roll-to-roll UV-nanoimprinting, and the stepwise reduction of stiffness from one layer to the next is essential for optimizing reproduction fidelity, especially for nanoscale features. Accordingly, we demonstrate the continuous replication of 3D nanostructures and the high-throughput fabrication of multilength scale resist structures, resulting in flexible polyethylene terephthalate film rolls with superhydrophobic properties. Moreover, a water-soluble UV-imprint resist (JRlift) is introduced that enables residue-free nanoimprinting in roll-to-roll. Thereby we could demonstrate high-throughput fabrication of metallic patterns with only 200 nm line width.

  16. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    NASA Astrophysics Data System (ADS)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high precision and large-scale coordinate measurement, one commonly used approach to determine the coordinate of a target point is utilizing the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report one design of OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized by LightTools Software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  17. Cells as advanced therapeutics: State-of-the-art, challenges, and opportunities in large scale biomanufacturing of high-quality cells for adoptive immunotherapies.

    PubMed

    Dwarshuis, Nate J; Parratt, Kirsten; Santiago-Miranda, Adriana; Roy, Krishnendu

    2017-05-15

    Therapeutic cells hold tremendous promise in treating currently incurable, chronic diseases since they perform multiple, integrated, complex functions in vivo compared to traditional small-molecule drugs or biologics. However, they also pose significant challenges as therapeutic products because (a) their complex mechanisms of action are difficult to understand and (b) low-cost bioprocesses for large-scale, reproducible manufacturing of cells have yet to be developed. Immunotherapies using T cells and dendritic cells (DCs) have already shown great promise in treating several types of cancers, and human mesenchymal stromal cells (hMSCs) are now extensively being evaluated in clinical trials as immune-modulatory cells. Despite these exciting developments, the full potential of cell-based therapeutics cannot be realized unless new engineering technologies enable cost-effective, consistent manufacturing of high-quality therapeutic cells at large scale. Here we review cell-based immunotherapy concepts focused on the state-of-the-art in manufacturing processes including cell sourcing, isolation, expansion, modification, quality control (QC), and culture media requirements. We also offer insights into how current technologies could be significantly improved and augmented by new technologies, and how disciplines must converge to meet the long-term needs for large-scale production of cell-based immunotherapies. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE PAGES

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara; ...

    2018-02-20

    In this paper, three complementary methods that accelerate neutral particle transport calculations and use leadership-class computers fully and effectively have been implemented in the code Denovo: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.
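
    For orientation, the sketch below shows textbook Rayleigh quotient iteration on a dense symmetric matrix. It is a generic illustration, not Denovo's implementation: each step solves a shifted system (A - μI)y = x, and it is exactly these nearly singular, ill-conditioned shifted systems that motivate pairing RQI with a strong Krylov solver and preconditioner as described above.

    ```python
    import numpy as np

    def rayleigh_quotient_iteration(A, x0, tol=1e-10, max_iter=50):
        """Generic dense RQI; converges cubically near a simple eigenpair."""
        x = x0 / np.linalg.norm(x0)
        mu = x @ A @ x  # Rayleigh quotient as the eigenvalue estimate
        for _ in range(max_iter):
            # The shift mu approaches an eigenvalue, so (A - mu*I) becomes
            # nearly singular; production codes use a robust iterative solver.
            y = np.linalg.solve(A - mu * np.eye(A.shape[0]), x)
            x = y / np.linalg.norm(y)
            mu_new = x @ A @ x
            if abs(mu_new - mu) < tol:
                return mu_new, x
            mu = mu_new
        return mu, x

    # Small symmetric test matrix as a demonstration:
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = (M + M.T) / 2
    eigenvalue, eigenvector = rayleigh_quotient_iteration(A, rng.standard_normal(50))
    ```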

  19. Enablers and barriers to large-scale uptake of improved solid fuel stoves: a systematic review.

    PubMed

    Rehfuess, Eva A; Puzzolo, Elisa; Stanistreet, Debbi; Pope, Daniel; Bruce, Nigel G

    2014-02-01

    Globally, 2.8 billion people rely on household solid fuels. Reducing the resulting adverse health, environmental, and development consequences will involve transitioning through a mix of clean fuels and improved solid fuel stoves (IS) of demonstrable effectiveness. To date, achieving uptake of IS has presented significant challenges. We performed a systematic review of factors that enable or limit large-scale uptake of IS in low- and middle-income countries. We conducted systematic searches of multidisciplinary databases and specialist websites, and consulted experts. The review drew on qualitative, quantitative, and case studies and used standardized methods for screening, data extraction, critical appraisal, and synthesis. We summarized our findings as "factors" relating to one of seven domains: fuel and technology characteristics; household and setting characteristics; knowledge and perceptions; finance, tax, and subsidy aspects; market development; regulation, legislation, and standards; and programmatic and policy mechanisms. We also recorded issues that impacted equity. We identified 31 factors influencing uptake from 57 studies conducted in Asia, Africa, and Latin America. All domains matter. Although factors such as offering technologies that meet household needs and save fuel, user training and support, effective financing, and facilitative government action appear to be critical, none guarantee success: All factors can be influential, depending on context. The nature of available evidence did not permit further prioritization. Achieving adoption and sustained use of IS at a large scale requires that all factors, spanning household/community and program/societal levels, be assessed and supported by policy. We propose a planning tool that would aid this process and suggest further research to incorporate an evaluation of effectiveness.

  20. Obtaining high-resolution stage forecasts by coupling large-scale hydrologic models with sensor data

    NASA Astrophysics Data System (ADS)

    Fries, K. J.; Kerkez, B.

    2017-12-01

    We investigate how "big" quantities of distributed sensor data can be coupled with a large-scale hydrologic model, in particular the National Water Model (NWM), to obtain hyper-resolution forecasts. The recent launch of the NWM provides a great example of how growing computational capacity is enabling a new generation of massive hydrologic models. While the NWM spans an unprecedented spatial extent, many questions remain about how to improve forecasts at the street level, the resolution at which many stakeholders make critical decisions. Further, the NWM runs on supercomputers, so water managers who have access to their own high-resolution measurements may not readily be able to assimilate them into the model. To that end, we ask: how can the advances of the large-scale NWM be coupled with new local observations to enable hyper-resolution hydrologic forecasts? A methodology is proposed whereby the flow forecasts of the NWM are directly mapped to high-resolution stream levels using Dynamical System Identification. We apply the methodology across a sensor network of 182 gages in Iowa. Approximately one third of these sites have been shown to perform well in high-resolution flood forecasting when coupled with the outputs of the NWM. The quality of these forecasts is characterized using Principal Component Analysis and Random Forests to identify where the NWM may benefit from new sources of local observations. We also discuss how this approach can help municipalities identify where they should place low-cost sensors to benefit most from flood forecasts of the NWM.
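
    The mapping idea can be illustrated with a toy surrogate. The sketch below fits a simple least-squares ARX model from modeled discharge to observed stage and rolls it forward with forecast discharge as input; it is only a stand-in for the Dynamical System Identification method the abstract refers to, and the synthetic data are made up for the example.

    ```python
    import numpy as np

    def fit_arx(stage, flow, na=2, nb=2):
        """Fit stage[t] ~ sum_i a_i*stage[t-i] + sum_j b_j*flow[t-j]."""
        rows, targets = [], []
        for t in range(max(na, nb), len(stage)):
            past_stage = stage[t - na:t][::-1]  # stage[t-1], ..., stage[t-na]
            past_flow = flow[t - nb:t][::-1]    # flow[t-1], ..., flow[t-nb]
            rows.append(np.concatenate([past_stage, past_flow]))
            targets.append(stage[t])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta[:na], theta[na:]

    def forecast(stage_hist, flow_series, a, b, steps):
        """Roll the identified model forward using forecast discharge as input."""
        stage = list(stage_hist)
        start = len(stage_hist)
        for t in range(start, start + steps):
            past_stage = np.array(stage[t - len(a):t][::-1])
            past_flow = np.array(flow_series[t - len(b):t][::-1])
            stage.append(a @ past_stage + b @ past_flow)
        return stage[start:]

    # Synthetic demonstration: stage loosely tracking a cumulative flow signal.
    rng = np.random.default_rng(1)
    flow = np.abs(rng.standard_normal(200)).cumsum() * 0.1
    stage = 0.5 * flow + rng.standard_normal(200) * 0.05
    a, b = fit_arx(stage, flow)
    predicted_stage = forecast(stage[:150], flow, a, b, steps=10)
    ```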

  1. Eigenvalue Solvers for Modeling Nuclear Reactors on Leadership Class Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaybaugh, R. N.; Ramirez-Zweiger, M.; Pandya, Tara

    In this paper, three complementary methods that accelerate neutral particle transport calculations and use leadership-class computers fully and effectively have been implemented in the code Denovo: a multigroup block (MG) Krylov solver, a Rayleigh quotient iteration (RQI) eigenvalue solver, and a multigrid in energy (MGE) preconditioner. The MG Krylov solver converges more quickly than Gauss-Seidel and enables energy decomposition such that Denovo can scale to hundreds of thousands of cores. RQI should converge in fewer iterations than power iteration (PI) for large and challenging problems. RQI creates shifted systems that would not be tractable without the MG Krylov solver. It also creates ill-conditioned matrices. The MGE preconditioner reduces iteration count significantly when used with RQI and takes advantage of the new energy decomposition such that it can scale efficiently. Each individual method has been described before, but this is the first time they have been demonstrated to work together effectively. The combination of solvers enables the RQI eigenvalue solver to work better than the other available solvers for large reactor problems on leadership-class machines. Using these methods together, RQI converged in fewer iterations and in less time than PI for a full pressurized water reactor core. These solvers also performed better than an Arnoldi eigenvalue solver for a reactor benchmark problem when energy decomposition is needed. The MG Krylov, MGE preconditioner, and RQI solver combination also scales well in energy. Finally, this solver set is a strong choice for very large and challenging problems.

  2. Multi-photon microfabrication of three-dimensional capillary-scale vascular networks

    NASA Astrophysics Data System (ADS)

    Skylar-Scott, Mark A.; Liu, Man-Chi; Wu, Yuelong; Yanik, Mehmet Fatih

    2017-02-01

    Biomimetic models of microvasculature could enable assays of complex cellular behavior at the capillary level, and enable efficient nutrient perfusion for the maintenance of tissues. However, existing three-dimensional printing methods for generating perfusable microvasculature have insufficient resolution to recapitulate the microscale geometry of capillaries. Here, we present a collection of multiphoton microfabrication methods that enable the production of precise, three-dimensional, branched microvascular networks in collagen. When endothelial cells are added to the channels, they form perfusable lumens with diameters as small as 10 μm. Using a similar photochemistry, we also demonstrate the micropatterning of proteins embedded in microfabricated collagen scaffolds, producing hybrid scaffolds with both defined microarchitecture and integrated gradients of chemical cues. We provide examples of how these hybrid microfabricated scaffolds could be used in angiogenesis and cell homing assays. Finally, we describe a new method for increasing the micropatterning speed by synchronous laser and stage scanning. Using these technologies, we are working towards large-scale (>1 cm), high-resolution (~1 μm) scaffolds with both microarchitecture and embedded protein cues, with applications in three-dimensional assays of cellular behavior.

  3. Xray: N-dimensional, labeled arrays for analyzing physical datasets in Python

    NASA Astrophysics Data System (ADS)

    Hoyer, S.

    2015-12-01

    Efficient analysis of geophysical datasets requires tools that both preserve and utilize metadata, and that transparently scale to process large datasets. Xray is such a tool, in the form of an open-source Python library for analyzing the labeled, multi-dimensional array (tensor) datasets that are ubiquitous in the Earth sciences. Xray's approach pairs Python data structures based on the data model of the netCDF file format with the proven design and user interface of pandas, the popular Python data analysis library for labeled tabular data. On top of the NumPy array, xray adds labeled dimensions (e.g., "time") and coordinate values (e.g., "2015-04-10"), which it uses to enable a host of operations powered by these labels: selection, aggregation, alignment, broadcasting, split-apply-combine, interoperability with pandas, and serialization to netCDF/HDF5. Many of these operations are enabled by xray's tight integration with pandas. Finally, to allow for easy parallelism and to enable its labeled data operations to scale to datasets that do not fit into memory, xray integrates with the parallel processing library dask.
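
    A few lines illustrate the label-powered operations described above. Note that the library has since been renamed xarray, which is the import used here; the temperature grid is synthetic.

    ```python
    import numpy as np
    import pandas as pd
    import xarray as xr  # the library described above, under its later name

    # Synthetic example: three weeks of daily temperature on a small lat/lon grid.
    times = pd.date_range("2015-04-01", periods=21)
    temps = xr.DataArray(
        15 + 8 * np.random.randn(21, 4, 5),
        dims=("time", "lat", "lon"),
        coords={"time": times,
                "lat": np.linspace(-20, 20, 4),
                "lon": np.linspace(0, 40, 5)},
        name="temperature",
    )

    # Label-based selection and aggregation, with no positional bookkeeping:
    april_10 = temps.sel(time="2015-04-10")         # select by coordinate value
    weekly_mean = temps.resample(time="1W").mean()  # split-apply-combine in time
    zonal_mean = temps.mean(dim="lon")              # aggregate a named dimension
    ```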

  4. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial extent of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional approaches to derived flood frequency analysis and is therefore recommended for large-scale flood risk case studies.
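
    The final step of such a continuous-simulation chain, deriving flood quantiles from a long synthetic series, is simple enough to sketch. The example below blocks simulated daily discharge into years, takes annual maxima, and reads off empirical return levels; the gamma-distributed discharge is a made-up stand-in for the weather-generator/SWIM output described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    years, days = 10_000, 365
    # Placeholder for 10,000 years of simulated daily discharge (m^3/s):
    daily_q = rng.gamma(shape=2.0, scale=50.0, size=(years, days))

    annual_max = np.sort(daily_q.max(axis=1))  # one maximum per simulated year

    def return_level(sorted_maxima, T):
        """Empirical T-year flood: the (1 - 1/T) quantile of annual maxima."""
        return np.quantile(sorted_maxima, 1.0 - 1.0 / T)

    for T in (10, 100, 1000):
        print(f"{T:>5}-year flood: {return_level(annual_max, T):8.1f} m^3/s")
    ```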

  5. Selecting habitat to survive: the impact of road density on survival in a large carnivore.

    PubMed

    Basille, Mathieu; Van Moorter, Bram; Herfindal, Ivar; Martin, Jodie; Linnell, John D C; Odden, John; Andersen, Reidar; Gaillard, Jean-Michel

    2013-01-01

    Habitat selection studies generally assume that animals select habitat and food resources at multiple scales to maximise their fitness. However, animals sometimes prefer habitats of apparently low quality, especially when considering the costs associated with spatially heterogeneous human disturbance. We used spatial variation in human disturbance, and its consequences for lynx survival, a direct fitness component, to test the Hierarchical Habitat Selection hypothesis in a population of Eurasian lynx Lynx lynx in southern Norway. Data from 46 lynx monitored with telemetry indicated that a high proportion of forest strongly reduced the risk of mortality from legal hunting at the home range scale, while increasing road density strongly increased this risk at the finer scale within the home range. We found hierarchical effects of the impact of human disturbance, with a higher road density at a large scale reinforcing its negative impact at a fine scale. Conversely, we demonstrated that lynx shifted their habitat selection to avoid areas with the highest road densities within their home ranges, thus supporting a compensatory mechanism at fine scale enabling lynx to mitigate the impact of large-scale disturbance. Human impact, positively associated with high road accessibility, was thus a stronger driver of lynx space use at a finer scale, with home range characteristics nevertheless constraining habitat selection. Our study demonstrates the truly hierarchical nature of habitat selection, which aims at maximising fitness by selecting against limiting factors at multiple spatial scales, and indicates that scale-specific heterogeneity of the environment drives individual spatial behaviour by means of trade-offs across spatial scales.

  6. Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwabe, Ulrich; Fishman, Oleg

    2015-03-20

    The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises both to increase the energy yield of large-scale PV systems, by reducing losses and increasing yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.

  7. Identification of Phosphorylated Proteins on a Global Scale.

    PubMed

    Iliuk, Anton

    2018-05-31

    Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc.

  8. Solar Energy Technologies Office FY 2017 Budget At-A-Glance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2016-03-01

    The Solar Energy Technologies Office supports the SunShot Initiative goal to make solar energy technologies cost competitive with conventional energy sources by 2020. Reducing the total installed cost for utility-scale solar electricity by approximately 75% (2010 baseline) to roughly $0.06 per kWh without subsidies will enable rapid, large-scale adoption of solar electricity across the United States. This investment will help re-establish American technological and market leadership in solar energy, reduce environmental impacts of electricity generation, and strengthen U.S. economic competitiveness.

  9. Data partitioning enables the use of standard SOAP Web Services in genome-scale workflows.

    PubMed

    Sztromwasser, Pawel; Puntervoll, Pål; Petersen, Kjell

    2011-07-26

    Biological databases and computational biology tools are provided by research groups around the world, and made accessible on the Web. Combining these resources is a common practice in bioinformatics, but integration of heterogeneous and often distributed tools and datasets can be challenging. To date, this challenge has commonly been addressed in a pragmatic way, by tedious and error-prone scripting. Recently, however, a more reliable technique has been identified and proposed as the platform to tie together bioinformatics resources, namely Web Services. In the last decade, Web Services have spread widely in bioinformatics and earned the title of recommended technology. However, in the era of high-throughput experimentation, a major concern regarding Web Services is their ability to handle large-scale data traffic. We propose a stream-like communication pattern for standard SOAP Web Services that enables efficient flow of large data traffic between a workflow orchestrator and Web Services. We evaluated the data-partitioning strategy by comparing it with typical communication patterns on an example pipeline for genomic sequence annotation. The results show that data partitioning lowers the resource demands of services and increases their throughput, which in consequence allows in-silico experiments to be executed at genome scale, using standard SOAP Web Services and workflows. As a proof of principle, we annotated an RNA-seq dataset using a plain BPEL workflow engine.
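
    The communication pattern itself is straightforward to sketch: rather than issuing one huge request, the orchestrator streams fixed-size partitions through the service. In the sketch below, annotate_partition is a hypothetical stand-in for a generated SOAP client call, not a real service API.

    ```python
    # Hypothetical stand-in for a SOAP client call generated from a WSDL;
    # in practice this would serialize the chunk into a SOAP request.
    def annotate_partition(sequences):
        return [{"id": s["id"], "annotation": "..."} for s in sequences]

    def partitions(items, size):
        """Yield consecutive fixed-size slices of the input list."""
        for i in range(0, len(items), size):
            yield items[i:i + size]

    def annotate_genome(sequences, partition_size=100):
        results = []
        for chunk in partitions(sequences, partition_size):
            # Each call carries a bounded payload, keeping the service's
            # memory usage flat and letting partitions flow in a stream.
            results.extend(annotate_partition(chunk))
        return results

    annotated = annotate_genome([{"id": f"seq{i}"} for i in range(1000)])
    ```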

  10. Ontology-Driven Provenance Management in eScience: An Application in Parasite Research

    NASA Astrophysics Data System (ADS)

    Sahoo, Satya S.; Weatherly, D. Brent; Mutharaju, Raghava; Anantharam, Pramod; Sheth, Amit; Tarleton, Rick L.

    Provenance, from the French word "provenir", describes the lineage or history of a data entity. Provenance is critical information in scientific applications to verify experiment process, validate data quality and associate trust values with scientific results. Current industrial scale eScience projects require an end-to-end provenance management infrastructure. This infrastructure needs to be underpinned by formal semantics to enable analysis of large scale provenance information by software applications. Further, effective analysis of provenance information requires well-defined query mechanisms to support complex queries over large datasets. This paper introduces an ontology-driven provenance management infrastructure for biology experiment data, as part of the Semantic Problem Solving Environment (SPSE) for Trypanosoma cruzi (T.cruzi). This provenance infrastructure, called T.cruzi Provenance Management System (PMS), is underpinned by (a) a domain-specific provenance ontology called Parasite Experiment ontology, (b) specialized query operators for provenance analysis, and (c) a provenance query engine. The query engine uses a novel optimization technique based on materialized views called materialized provenance views (MPV) to scale with increasing data size and query complexity. This comprehensive ontology-driven provenance infrastructure not only allows effective tracking and management of ongoing experiments in the Tarleton Research Group at the Center for Tropical and Emerging Global Diseases (CTEGD), but also enables researchers to retrieve the complete provenance information of scientific results for publication in literature.

  11. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze the requirements for a telemedical computing infrastructure and compare them with the requirements found in a typical metacomputing environment. We show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements of telemedicine.

  12. Method for forming a nano-textured substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Sangmoo; Hu, Liangbing; Cui, Yi

    A method for forming a nano-textured surface on a substrate is disclosed. An illustrative embodiment of the present invention comprises dispensing a nanoparticle ink of nanoparticles and solvent onto the surface of a substrate, distributing the ink to form a substantially uniform, liquid nascent layer of the ink, and enabling the solvent to evaporate from the nanoparticle ink, thereby inducing the nanoparticles to assemble into a textured layer. Methods in accordance with the present invention enable rapid formation of large-area substrates having a nano-textured surface. Embodiments of the present invention are well suited for texturing substrates using high-speed, large-scale, roll-to-roll coating equipment, such as that used in office product, film coating, and flexible packaging applications. Further, embodiments of the present invention are well suited for use with rigid or flexible substrates.

  13. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
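
    The screening criterion in such a study reduces to a per-atom formation energy, E_f = [E(AxByCz) - x·E(A) - y·E(B) - z·E(C)] / (x+y+z), with thermodynamic stability indicated by E_f < 0. A minimal sketch of that bookkeeping, with made-up placeholder energies rather than computed DFT values:

    ```python
    def formation_energy_per_atom(e_total, counts, e_ref):
        """counts: atoms of each element; e_ref: reference energy per atom (eV)."""
        n_atoms = sum(counts.values())
        e_form = e_total - sum(n * e_ref[el] for el, n in counts.items())
        return e_form / n_atoms

    # Hypothetical numbers for illustration only, not actual DFT results:
    e_f = formation_energy_per_atom(
        e_total=-42.7,                      # total energy of the ternary cell
        counts={"Li": 4, "Mg": 2, "Al": 2},
        e_ref={"Li": -1.9, "Mg": -1.5, "Al": -3.7},
    )
    print(f"E_f = {e_f:+.3f} eV/atom ->", "stable" if e_f < 0 else "unstable")
    ```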

  14. Weighing trees with lasers: advances, challenges and opportunities

    PubMed Central

    Boni Vicari, M.; Burt, A.; Calders, K.; Lewis, S. L.; Raumonen, P.; Wilkes, P.

    2018-01-01

    Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods. PMID:29503726

  15. Development and Evaluation of Season-ahead Precipitation and Streamflow Predictions for Sectoral Management in Western Ethiopia

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Alexander, S.; WU, S.

    2017-12-01

    Skillful season-ahead predictions conditioned on local and large-scale hydro-climate variables can provide valuable knowledge to farmers and reservoir operators, enabling informed water resource allocation and management decisions. In Ethiopia, the potential for advancing agriculture and hydropower management, and subsequently economic growth, is substantial, yet evidence suggests a weak adoption of prediction information by sectoral audiences. To address common critiques, including skill, scale, and uncertainty, probabilistic forecasts are developed at various scales - temporally and spatially - for the Finchaa hydropower dam and the Koga agricultural scheme in an attempt to promote uptake and application. Significant prediction skill is evident across scales, particularly for statistical models. This raises questions regarding other potential barriers to forecast utilization at community scales, which are also addressed.

  16. Ocean Research Enabled by Underwater Gliders.

    PubMed

    Rudnick, Daniel L

    2016-01-01

    Underwater gliders are autonomous underwater vehicles that profile vertically by changing their buoyancy and use wings to move horizontally. Gliders are useful for sustained observation at relatively fine horizontal scales, especially to connect the coastal and open ocean. In this review, research topics are grouped by time and length scales. Large-scale topics addressed include the eastern and western boundary currents and the regional effects of climate variability. The accessibility of horizontal length scales of order 1 km allows investigation of mesoscale and submesoscale features such as fronts and eddies. Because the submesoscales dominate vertical fluxes in the ocean, gliders have found application in studies of biogeochemical processes. At the finest scales, gliders have been used to measure internal waves and turbulent dissipation. The review summarizes gliders' achievements to date and assesses their future in ocean observation.

  17. Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping

    NASA Astrophysics Data System (ADS)

    Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni

    2017-04-01

    The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).

  18. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.

  19. Contrasting styles of large-scale displacement of unconsolidated sand: examples from the early Jurassic Navajo Sandstone on the Colorado Plateau, USA

    NASA Astrophysics Data System (ADS)

    Bryant, Gerald

    2015-04-01

    Large-scale soft-sediment deformation features in the Navajo Sandstone have been a topic of interest for nearly 40 years, ever since they were first explored as a criterion for discriminating between marine and continental processes in the depositional environment. For much of this time, evidence for large-scale sediment displacements was commonly attributed to processes of mass wasting, that is, gravity-driven movements of surficial sand. These slope failures were attributed to the inherent susceptibility of dune sand to environmental triggers such as earthquakes, floods, impacts, and the differential loading associated with dune topography. During the last decade, a new wave of research has focused in more detail on the event significance of deformation features, revealing a broad diversity of large-scale deformation morphologies. This research has led to a better appreciation of subsurface dynamics in the early Jurassic deformation events recorded in the Navajo Sandstone, including the important role of intrastratal sediment flow. This report documents two illustrative examples of large-scale sediment displacements represented in extensive outcrops of the Navajo Sandstone along the Utah/Arizona border. Architectural relationships in these outcrops provide definitive constraints that enable the recognition of a large-scale sediment outflow at one location and an equally large-scale subsurface flow at the other. At both sites, evidence for associated processes of liquefaction appears at depths of at least 40 m below the original depositional surface, nearly an order of magnitude deeper than has commonly been reported from modern settings. The surficial mass flow feature displays attributes that are consistent with much smaller-scale sediment eruptions (sand volcanoes) often documented from modern earthquake zones, including the development of hydraulic pressure from localized subsurface liquefaction and the subsequent escape of fluidized sand toward the unconfined conditions of the surface. The origin of the forces that produced the lateral, subsurface movement of a large body of sand at the other site is not readily apparent. The various constraints on modeling the generation of the lateral force required to produce the observed displacement are considered here, along with photodocumentation of key outcrop relationships.

  20. Item Selection and Pre-equating with Empirical Item Characteristic Curves.

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    An empirical item characteristic curve shows the probability of a correct response as a function of the student's total test score. These curves can be estimated from large-scale pretest data. They enable test developers to select items that discriminate well in the score region where decisions are made. A similar set of curves can be used to…
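
    A minimal sketch of how such empirical curves can be computed from pretest data (the function name, binning scheme, and data layout are illustrative assumptions, not taken from the record):

        import numpy as np

        def empirical_icc(responses, item, n_bins=10):
            # responses: (n_students, n_items) matrix of 0/1 item scores
            # from a large-scale pretest; item: column index of the item
            # whose characteristic curve is wanted.
            total = responses.sum(axis=1)
            edges = np.linspace(total.min(), total.max(), n_bins + 1)
            edges[-1] += 1  # make the top bin inclusive of the maximum score
            centers, p_correct = [], []
            for lo, hi in zip(edges[:-1], edges[1:]):
                in_bin = (total >= lo) & (total < hi)
                if in_bin.any():  # skip empty score bins
                    centers.append(0.5 * (lo + hi))
                    p_correct.append(responses[in_bin, item].mean())
            return np.array(centers), np.array(p_correct)

    Plotting p_correct against centers for each candidate item shows directly how sharply the item discriminates in the score region where decisions are made.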

  1. Visualizing Structure and Dynamics of Disaccharide Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, J. F.; Beckham, G. T.; Himmel, M. E.

    2012-01-01

    We examine the effect of several solvent models on the conformational properties and dynamics of disaccharides such as cellobiose and lactose. Significant variation in the timescales of large-scale conformational transitions is observed. Molecular dynamics simulation provides enough detail to enable insight through visualization of multidimensional data sets. We present a new way to visualize conformational space for disaccharides with Ramachandran plots.
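
    A Ramachandran-style view of disaccharide conformational space amounts to a 2D density over the glycosidic dihedral angles; a minimal sketch (the dihedral time series are placeholders standing in for angles extracted from an actual trajectory):

        import numpy as np

        # phi, psi: glycosidic dihedral angles (degrees) sampled along an MD
        # trajectory; synthetic values stand in for real trajectory output.
        rng = np.random.default_rng(2)
        phi = rng.normal(-80.0, 20.0, size=10_000)
        psi = rng.normal(120.0, 25.0, size=10_000)

        # Bin the angle pairs into a 2D density: high-count cells mark the
        # conformational basins, and bridges between them reveal transitions.
        counts, phi_edges, psi_edges = np.histogram2d(
            phi, psi, bins=72, range=[[-180, 180], [-180, 180]]
        )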

  2. Using Monoclonal Antibodies to Prevent Mucosal Transmission of Epidemic Infectious Diseases

    PubMed Central

    Zeitlin, Larry; Cone, Richard A.

    1999-01-01

    Passive immunization with antibodies has been shown to prevent a wide variety of diseases. Recent advances in monoclonal antibody technology are enabling the development of new methods for passive immunization of mucosal surfaces. Human monoclonal antibodies, produced rapidly, inexpensively, and in large quantities, may help prevent respiratory, diarrheal, and sexually transmitted diseases on a public health scale. PMID:10081672

  3. Large-scale optimal control of interconnected natural gas and electrical transmission systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    2016-04-01

    We present a detailed optimal control model that captures spatiotemporal interactions between gas and electric transmission networks. We use the model to study flexibility and economic opportunities provided by coordination. A large-scale case study of the Illinois system reveals that coordination can enable the delivery of significantly larger amounts of natural gas to the power grid. In particular, under a coordinated setting, gas-fired generators act as distributed demand response resources that can be controlled by the gas pipeline operator. This enables more efficient control of pressures and flows in space and time and overcomes delivery bottlenecks. We demonstrate that the additional flexibility not only can benefit the gas operator but can also lead to more efficient power grid operations and result in increased revenue for gas-fired power plants. We also use the optimal control model to analyze computational issues arising in these complex models. We demonstrate that the interconnected Illinois system with full physical resolution gives rise to a highly nonlinear optimal control problem with 4400 differential and algebraic equations and 1040 controls that can be solved with a state-of-the-art sparse optimization solver.

  4. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU

    PubMed Central

    Xia, Yong; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available because of their expense. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (ordinary differential equations) and the diffusion term of the monodomain model (partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations. PMID:26581957

  5. Parallel Optimization of 3D Cardiac Electrophysiological Model Using GPU.

    PubMed

    Xia, Yong; Wang, Kuanquan; Zhang, Henggui

    2015-01-01

    Large-scale 3D virtual heart model simulations are highly demanding in computational resources. This imposes a major challenge for traditional CPU-based computing resources, which either cannot meet the full computational demand or are not easily available because of their expense. The GPU, as a parallel computing environment, therefore provides an alternative for solving the large-scale computational problems of whole-heart modeling. In this study, using a 3D sheep atrial model as a test bed, we developed a GPU-based simulation algorithm to simulate the conduction of electrical excitation waves in the 3D atria. In the GPU algorithm, the multicellular tissue model was split into two components: the single-cell model (ordinary differential equations) and the diffusion term of the monodomain model (partial differential equation). This decoupling enabled realization of the GPU parallel algorithm. Furthermore, several optimization strategies were proposed based on the features of the virtual heart model, which enabled a 200-fold speedup compared to a CPU implementation. In conclusion, an optimized GPU algorithm has been developed that provides an economic and powerful platform for 3D whole-heart simulations.
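
    Both records above describe the same ODE/PDE decoupling. A minimal NumPy sketch of one operator-split time step on a 1D fiber (toy FitzHugh-Nagumo kinetics stand in for the detailed atrial cell model, and all names and values are illustrative):

        import numpy as np

        def monodomain_step(v, w, dt, dx, D=0.1):
            # Step 1: cell-model ODEs, evaluated pointwise. This part has no
            # spatial coupling, so it maps naturally to one GPU thread per cell.
            dv = v - v**3 / 3.0 - w
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            v, w = v + dt * dv, w + dt * dw
            # Step 2: diffusion term of the monodomain PDE via an explicit
            # finite-difference Laplacian with zero-flux boundaries.
            lap = np.empty_like(v)
            lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
            lap[0] = 2.0 * (v[1] - v[0]) / dx**2
            lap[-1] = 2.0 * (v[-2] - v[-1]) / dx**2
            return v + dt * D * lap, w

    Only the stencil in step 2 requires neighbor communication, which is what makes the split attractive for GPU parallelization.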

  6. The Cell Collective: Toward an open and collaborative approach to systems biology

    PubMed Central

    2012-01-01

    Background: Despite decades of new discoveries in biomedical research, the overwhelming complexity of cells has been a significant barrier to a fundamental understanding of how cells work as a whole. As such, the holistic study of biochemical pathways requires computer modeling. Due to the complexity of cells, it is not feasible for one person or group to model the cell in its entirety. Results: The Cell Collective is a platform that allows the world-wide scientific community to create these models collectively. Its interface enables users to build and use models without specifying any mathematical equations or computer code - addressing one of the major hurdles with computational research. In addition, this platform allows scientists to simulate and analyze the models in real-time on the web, including the ability to simulate loss/gain of function and test what-if scenarios in real time. Conclusions: The Cell Collective is a web-based platform that enables laboratory scientists from across the globe to collaboratively build large-scale models of various biological processes, and simulate/analyze them in real time. In this manuscript, we show examples of its application to a large-scale model of signal transduction. PMID:22871178

  7. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  8. Street Level Hydrology: An Urban Application of the WRF-Hydro Framework in Denver, Colorado

    NASA Astrophysics Data System (ADS)

    Read, L.; Hogue, T. S.; Salas, F. R.; Gochis, D.

    2015-12-01

    Urban flood modeling at the watershed scale carries unique challenges in routing complexity, data resolution, social and political issues, and land surface - infrastructure interactions. The ability to accurately trace and predict the flow of water through the urban landscape enables better emergency response management, floodplain mapping, and data for future urban infrastructure planning and development. These services are of growing importance as urban populations are expected to continue growing by 1.84% per year for the next 25 years, increasing the vulnerability of urban regions to damages and loss of life from floods. Although a range of watershed-scale models have been applied in specific urban areas to examine these issues, there is a trend towards national-scale hydrologic modeling enabled by supercomputing resources to understand larger system-wide hydrologic impacts and feedbacks. As such, it is important to address how urban landscapes can be represented in large-scale modeling processes. The current project investigates how coupling terrain and infrastructure routing can improve flow prediction and flooding events over the urban landscape. We utilize the WRF-Hydro modeling framework and a high-resolution terrain routing grid with the goal of compiling standard data needs necessary for fine-scale urban modeling and dynamic flood forecasting in the urban setting. The city of Denver is selected as a case study, as it has experienced several large flooding events in the last five years and has an urban annual population growth rate of 1.5%, one of the highest in the U.S. Our work highlights the hydro-informatic challenges associated with linking channel networks and drainage infrastructure in an urban area using the WRF-Hydro modeling framework and high-resolution urban models for short-term flood prediction.

  9. Flexible Organic Electronics for Use in Neural Sensing

    PubMed Central

    Bink, Hank; Lai, Yuming; Saudari, Sangameshwar R.; Helfer, Brian; Viventi, Jonathan; Van der Spiegel, Jan; Litt, Brian; Kagan, Cherie

    2016-01-01

    Recent research in brain-machine interfaces and devices to treat neurological disease indicate that important network activity exists at temporal and spatial scales beyond the resolution of existing implantable devices. High density, active electrode arrays hold great promise in enabling high-resolution interface with the brain to access and influence this network activity. Integrating flexible electronic devices directly at the neural interface can enable thousands of multiplexed electrodes to be connected using many fewer wires. Active electrode arrays have been demonstrated using flexible, inorganic silicon transistors. However, these approaches may be limited in their ability to be cost-effectively scaled to large array sizes (8×8 cm). Here we show amplifiers built using flexible organic transistors with sufficient performance for neural signal recording. We also demonstrate a pathway for a fully integrated, amplified and multiplexed electrode array built from these devices. PMID:22255558

  10. Systems Proteomics for Translational Network Medicine

    PubMed Central

    Arrell, D. Kent; Terzic, Andre

    2012-01-01

    Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016

  11. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successes have substantially improved our understanding of complex diseases. In spite of these successes, however, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle in building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the parameter space, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
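
    As a concrete illustration of the penalized-regression idea (the data below are random placeholders, and scikit-learn's penalized logistic regression stands in for the methods compared in the paper):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Placeholder data: N subjects genotyped at P >> N SNPs coded as
        # 0/1/2 minor-allele counts, with a binary disease label.
        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(200, 5000)).astype(float)
        y = rng.integers(0, 2, size=200)

        # L1 (LASSO-type) shrinkage sets most SNP effects exactly to zero;
        # L2 (ridge) shrinks all effects without variable selection. Both
        # constrain the parameter space, making "large P, small N" feasible.
        models = {
            "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
            "ridge": LogisticRegression(penalty="l2", C=0.1, max_iter=1000),
        }
        for name, model in models.items():
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(name, round(auc, 3))  # near 0.5 on random labels, as expected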

  12. A forward-advancing wave expansion method for numerical solution of large-scale sound propagation problems

    NASA Astrophysics Data System (ADS)

    Rolla, L. Barrera; Rice, H. J.

    2006-09-01

    In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward-advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one-way) parabolic equation method (PEM), usually discretized using standard finite difference or finite element methods. These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems and compared with analytical and theoretical solutions, showing this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
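
    The forward-only character of the FWEM parallels the classical one-way factorization of the Helmholtz operator. As a sketch (the notation is introduced here, and the factorization is exact only for a range-independent wavenumber k), the 2D equation

        \partial_x^2 p + \partial_y^2 p + k^2 p = 0

    splits into counter-propagating factors,

        \left( \partial_x - i\sqrt{\partial_y^2 + k^2} \right) \left( \partial_x + i\sqrt{\partial_y^2 + k^2} \right) p = 0,

    and retaining only the outgoing factor, \partial_x p = i\sqrt{\partial_y^2 + k^2}\,p, gives a first-order equation that can be marched forward in x; this is why backscatter is neglected and no large system matrix needs to be inverted.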

  13. The SCALE-UP Project

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    2015-03-01

    The Student Centered Active Learning Environment with Upside-down Pedagogies (SCALE-UP) project was developed nearly 20 years ago as an economical way to provide collaborative, interactive instruction even for large enrollment classes. Nearly all research-based pedagogies have been designed with fairly high faculty-student ratios. The economics of introductory courses at large universities often precludes that situation, so SCALE-UP was created as a way to facilitate highly collaborative active learning with large numbers of students served by only a few faculty and assistants. It enables those students not only to learn and succeed in acquiring content, but also to practice important 21st century skills like problem solving, communication, and teamsmanship. The approach was initially targeted at undergraduate science and engineering students taking introductory physics courses in large enrollment sections. It has since expanded to multiple content areas, including chemistry, math, engineering, biology, business, nursing, and even the humanities. Class sizes range from 24 to over 600. Data collected from multiple sites around the world indicate highly successful implementation at more than 250 institutions. NSF support was critical for initial development and dissemination efforts. Generously supported by NSF (9752313, 9981107) and FIPSE (P116B971905, P116B000659).

  14. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains, and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Alongside these advances, there is currently a tendency to refine and further complicate physically based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. A novel thematic science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational effort, this model enables early warnings for large areas. Using the ERA-Interim public dataset as forcing and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large-scale Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, with correlation coefficients exceeding 70% and root mean squared errors below 1%. When benchmarked against the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge achieves a Nash-Sutcliffe efficiency exceeding 0.7 over both the calibration and the validation periods.
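
    A minimal sketch of the kind of scalarized objective such a multi-objective calibration might minimize (the weighting and normalization are illustrative assumptions; the study's exact aggregation of the discharge and brightness-temperature criteria is not reproduced here):

        import numpy as np

        def nse(sim, obs):
            # Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 matches the
            # skill of simply predicting the observed mean.
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def calibration_objective(q_sim, q_obs, tb_sim, tb_obs, w=0.5):
            # Combine a discharge term (1 - NSE) with a normalized RMSE on
            # SMOS-like brightness temperatures; the weight w trades the two
            # criteria off and would itself be a calibration choice.
            q_term = 1.0 - nse(q_sim, q_obs)
            tb_rmse = np.sqrt(np.mean((tb_sim - tb_obs) ** 2))
            return w * q_term + (1.0 - w) * tb_rmse / np.std(tb_obs)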

  15. TV Audience Measurement with Big Data.

    PubMed

    Hill, Shawndra

    2014-06-01

    TV audience measurement involves estimating the number of viewers tuned into a TV show at any given time as well as their demographics. First introduced shortly after commercial television broadcasting began in the late 1940s, audience measurement allowed the business of television to flourish by offering networks a way to quantify the monetary value of TV audiences for advertisers, who pay for the estimated number of eyeballs watching during commercials. The first measurement techniques suffered from multiple limitations because reliable, large-scale data were costly to acquire. Yet despite these limitations, measurement standards remained largely unchanged for decades until devices such as cable boxes, video-on-demand boxes, and cell phones, as well as web apps, Internet browser clicks, web queries, and social media activity, resulted in an explosion of digitally available data. TV viewers now leave digital traces that can be used to track almost every aspect of their daily lives, allowing the potential for large-scale aggregation across data sources for individual users and groups and enabling the tracking of more people on more dimensions for more shows. Data are now more comprehensive, available in real time, and cheaper to acquire, enabling accurate and fine-grained TV audience measurement. In this article, I discuss the evolution of audience measurement and what the recent data explosion means for the TV industry and academic research.

  16. Lateral and feedforward inhibition suppress asynchronous activity in a large, biophysically-detailed computational model of the striatal network

    PubMed Central

    Moyer, Jason T.; Halterman, Benjamin L.; Finkel, Leif H.; Wolf, John A.

    2014-01-01

    Striatal medium spiny neurons (MSNs) receive lateral inhibitory projections from other MSNs and feedforward inhibitory projections from fast-spiking, parvalbumin-containing striatal interneurons (FSIs). The functional roles of these connections are unknown, and difficult to study in an experimental preparation. We therefore investigated the functionality of both lateral (MSN-MSN) and feedforward (FSI-MSN) inhibition using a large-scale computational model of the striatal network. The model consists of 2744 MSNs comprised of 189 compartments each and 121 FSIs comprised of 148 compartments each, with dendrites explicitly represented and almost all known ionic currents included and strictly constrained by biological data as appropriate. Our analysis of the model indicates that both lateral inhibition and feedforward inhibition function at the population level to limit non-ensemble MSN spiking while preserving ensemble MSN spiking. Specifically, lateral inhibition enables large ensembles of MSNs firing synchronously to strongly suppress non-ensemble MSNs over a short time-scale (10–30 ms). Feedforward inhibition enables FSIs to strongly inhibit weakly activated, non-ensemble MSNs while moderately inhibiting activated ensemble MSNs. Importantly, FSIs appear to more effectively inhibit MSNs when FSIs fire asynchronously. Both types of inhibition would increase the signal-to-noise ratio of responding MSN ensembles and contribute to the formation and dissolution of MSN ensembles in the striatal network. PMID:25505406

  17. Materials Integration and Doping of Carbon Nanotube-based Logic Circuits

    NASA Astrophysics Data System (ADS)

    Geier, Michael

    Over the last 20 years, extensive research into the structure and properties of single-walled carbon nanotubes (SWCNTs) has elucidated many of their exceptional qualities, including record-setting tensile strength, excellent chemical stability, distinctive optoelectronic features, and outstanding electronic transport characteristics. In order to exploit these remarkable qualities, many application-specific hurdles must be overcome before the material can be implemented in commercial products. For electronic applications, recent advances in sorting SWCNTs by electronic type have enabled significant progress towards SWCNT-based integrated circuits. Despite these advances, demonstrations of SWCNT-based devices with suitable characteristics for large-scale integrated circuits have been limited. The processing methodologies, materials integration, and mechanistic understanding of electronic properties developed in this dissertation have enabled unprecedented scales of SWCNT-based transistor fabrication and integrated circuit demonstrations. Innovative materials selection and processing methods are at the core of this work, and these advances have led to transistors with the transport properties required for modern circuit integration. First, extensive collaborations with other research groups allowed for the exploration of SWCNT thin-film transistors (TFTs) using a wide variety of materials and processing methods such as new dielectric materials, hybrid semiconductor materials systems, and solution-based printing of SWCNT TFTs. These materials were integrated into circuit demonstrations such as NOR and NAND logic gates, voltage-controlled ring oscillators, and D-flip-flops using both rigid and flexible substrates. This dissertation explores strategies for implementing complementary SWCNT-based circuits, which were developed by using local metal gate structures that achieve enhancement-mode p-type and n-type SWCNT TFTs with widely separated and symmetric threshold voltages. Additionally, a novel n-type doping procedure for SWCNT TFTs was developed utilizing a solution-processed organometallic small molecule to demonstrate the first network top-gated n-type SWCNT TFTs. Lastly, new doping and encapsulation layers were incorporated to stabilize both p-type and n-type SWCNT TFT electronic properties, which enabled the fabrication of large-scale memory circuits. Employing these materials and processing advances has addressed many application-specific barriers to commercialization. For instance, the first thin-film SWCNT complementary metal-oxide-semiconductor (CMOS) logic devices are demonstrated with sub-nanowatt static power consumption and full rail-to-rail voltage transfer characteristics. With the introduction of a new n-type Rh-based molecular dopant, the first SWCNT TFTs are fabricated in top-gate geometries over large areas with high yield. Then, by utilizing robust encapsulation methods, stable and uniform electronic performance of both p-type and n-type SWCNT TFTs has been achieved. Based on these complementary SWCNT TFTs, it is possible to simulate, design, and fabricate arrays of low-power static random access memory (SRAM) circuits, achieving large-scale integration for the first time based on solution-processed semiconductors. Together, this work provides a direct pathway for solution-processable, large-scale, power-efficient advanced integrated logic circuits and systems.

  18. geoknife: Reproducible web-processing of large gridded datasets

    USGS Publications Warehouse

    Read, Jordan S.; Walker, Jordan I.; Appling, Alison P.; Blodgett, David L.; Read, Emily K.; Winslow, Luke A.

    2016-01-01

    Geoprocessing of large gridded data according to overlap with irregular landscape features is common to many large-scale ecological analyses. The geoknife R package was created to facilitate reproducible analyses of gridded datasets found on the U.S. Geological Survey Geo Data Portal web application or elsewhere, using a web-enabled workflow that eliminates the need to download and store large datasets that are reliably hosted on the Internet. The package provides access to several data subset and summarization algorithms that are available on remote web processing servers. Outputs from geoknife include spatial and temporal data subsets, spatially-averaged time series values filtered by user-specified areas of interest, and categorical coverage fractions for various land-use types.

  19. Sparse Regression Based Structure Learning of Stochastic Reaction Networks from Single Cell Snapshot Time Series.

    PubMed

    Klimovskaia, Anna; Ganscha, Stefan; Claassen, Manfred

    2016-12-01

    Stochastic chemical reaction networks constitute a model class to quantitatively describe dynamics and cell-to-cell variability in biological systems. The topology of these networks typically is only partially characterized due to experimental limitations. Current approaches for refining network topology are based on the explicit enumeration of alternative topologies and are therefore restricted to small problem instances with almost complete knowledge. We propose the reactionet lasso, a computational procedure that derives a stepwise sparse regression approach on the basis of the Chemical Master Equation, enabling large-scale structure learning for reaction networks by implicitly accounting for billions of topology variants. We have assessed the structure learning capabilities of the reactionet lasso on synthetic data for the complete TRAIL induced apoptosis signaling cascade comprising 70 reactions. We find that the reactionet lasso is able to efficiently recover the structure of these reaction systems, ab initio, with high sensitivity and specificity. With only < 1% false discoveries, the reactionet lasso is able to recover 45% of all true reactions ab initio among > 6000 possible reactions and over 10^2000 network topologies. In conjunction with information-rich single cell technologies such as single cell RNA sequencing or mass cytometry, the reactionet lasso will enable large-scale structure learning, particularly in areas with partial network structure knowledge, such as cancer biology, and thereby enable the detection of pathological alterations of reaction networks. We provide software to allow for wide applicability of the reactionet lasso.
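
    The sparse-regression idea at the core of the reactionet lasso can be illustrated with an ordinary LASSO fit over a library of candidate mass-action propensities (a toy stand-in for the paper's Chemical Master Equation-based estimator; all data and names below are synthetic):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        x = rng.uniform(0.1, 2.0, size=(500, 3))  # sampled species abundances

        # Candidate reaction propensities (the "library"); the true dynamics
        # below use only the first and third columns.
        Phi = np.column_stack([x[:, 0], x[:, 1], x[:, 0] * x[:, 1], x[:, 2]])
        dxdt = 2.0 * Phi[:, 0] - 0.5 * Phi[:, 2] + 0.01 * rng.normal(size=500)

        # The L1 penalty drives coefficients of inactive candidate reactions
        # to exactly zero, recovering a sparse network structure without
        # enumerating topologies explicitly.
        fit = Lasso(alpha=0.01).fit(Phi, dxdt)
        print(np.round(fit.coef_, 2))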

  20. Point contact tunneling spectroscopy apparatus for large scale mapping of surface superconducting properties

    DOE PAGES

    Groll, Nickolas; Pellin, Michael J.; Zasadzinksi, John F.; ...

    2015-09-18

    In this paper, we describe the design and testing of a point contact tunneling spectroscopy device that can measure material surface superconducting properties (i.e., the superconducting gap Δ and the critical temperature T_C) and the density of states over large surface areas, up to mm² in size. The tip's lateral (X,Y) motion, mounted on an (X,Y,Z) piezo-stage, was calibrated on a patterned substrate consisting of Nb lines sputtered on a gold film, using both normal (Al) and superconducting (PbSn) tips at 1.5 K. The tip's vertical (Z) motion control enables adjustment of the tip-sample junction resistance, which can be measured over 7 orders of magnitude, from a quasi-ohmic regime (a few hundred Ω) to the tunnel regime (tens of kΩ up to a few GΩ). The low-noise electronics and LabVIEW program interface are also presented. Finally, the point contact regime and the large-scale motion capabilities are of particular interest for mapping and testing the superconducting properties of macroscopic-scale superconductor-based devices.

  1. Large-scale filament formation inhibits the activity of CTP synthetase

    PubMed Central

    Barry, Rachael M; Bitbol, Anne-Florence; Lorestani, Alexander; Charles, Emeric J; Habrian, Chris H; Hansen, Jesse M; Li, Hsin-Jung; Baldwin, Enoch P; Wingreen, Ned S; Kollman, Justin M; Gitai, Zemer

    2014-01-01

    CTP Synthetase (CtpS) is a universally conserved and essential metabolic enzyme. While many enzymes form small oligomers, CtpS forms large-scale filamentous structures of unknown function in prokaryotes and eukaryotes. By simultaneously monitoring CtpS polymerization and enzymatic activity, we show that polymerization inhibits activity, and CtpS's product, CTP, induces assembly. To understand how assembly inhibits activity, we used electron microscopy to define the structure of CtpS polymers. This structure suggests that polymerization sterically hinders a conformational change necessary for CtpS activity. Structure-guided mutagenesis and mathematical modeling further indicate that coupling activity to polymerization promotes cooperative catalytic regulation. This previously uncharacterized regulatory mechanism is important for cellular function since a mutant that disrupts CtpS polymerization disrupts E. coli growth and metabolic regulation without reducing CTP levels. We propose that regulation by large-scale polymerization enables ultrasensitive control of enzymatic activity while storing an enzyme subpopulation in a conformationally restricted form that is readily activatable. DOI: http://dx.doi.org/10.7554/eLife.03638.001 PMID:25030911

  2. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and reducing the solution time through the operation counts associated with solving the discretized equations to a sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  3. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix, which can cause difficulties in both storage and the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which saves memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
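
    A compact sketch of the two ingredients, thresholding the Jacobian into sparse form and solving the regularized normal equations with CG (the thresholding rule, the Tikhonov term, and the serial rather than block-parallel CG are simplifications relative to the paper's method):

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import LinearOperator, cg

        def thresholded_cg_step(J, dv, rel_thresh=1e-3, lam=1e-2):
            # Zero Jacobian entries below a relative threshold and store the
            # result sparsely, so the (near-)zeros cost no memory.
            J_sp = csr_matrix(np.where(np.abs(J) >= rel_thresh * np.abs(J).max(), J, 0.0))
            n = J_sp.shape[1]
            # Solve (J^T J + lam I) dx = J^T dv with conjugate gradients,
            # applying J and J^T as operators without forming J^T J.
            A = LinearOperator((n, n), matvec=lambda p: J_sp.T @ (J_sp @ p) + lam * p)
            dx, info = cg(A, J_sp.T @ dv)
            return dx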

  4. Spatial Modeling and Uncertainty Assessment of Fine Scale Surface Processes Based on Coarse Terrain Elevation Data

    NASA Astrophysics Data System (ADS)

    Rasera, L. G.; Mariethoz, G.; Lane, S. N.

    2017-12-01

    Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.

  5. The impact of new forms of large-scale general practice provider collaborations on England's NHS: a systematic review.

    PubMed

    Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel

    2018-03-01

    Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. To review the evidence of the impact of new forms of large-scale general practice provider collaborations in England. Systematic review. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why.

  6. The influence of control parameter estimation on large scale geomorphological interpretation of pointclouds

    NASA Astrophysics Data System (ADS)

    Dorninger, P.; Koma, Z.; Székely, B.

    2012-04-01

    In recent years, laser scanning, also referred to as LiDAR, has proved to be an important tool for topographic data acquisition. Basically, laser scanning acquires a more or less homogeneously distributed point cloud. These points represent all natural objects like terrain and vegetation as well as man-made objects such as buildings, streets, powerlines, or other constructions. Due to the enormous amount of data provided by current scanning systems, capturing up to several hundred thousand points per second, the immediate application of such point clouds for large-scale interpretation and analysis is often prohibitive due to restrictions of the hardware and software infrastructure. To overcome this, numerous methods exist for computing derived products. Commonly, Digital Terrain Models (DTM) or Digital Surface Models (DSM) are derived to represent the topography using a regular grid as the data structure. The obvious advantages are a significant reduction of the amount of data and the introduction of an implicit neighborhood topology enabling the application of efficient post-processing methods. The major disadvantages are the loss of 3D information (i.e. overhangs) as well as the loss of information due to the interpolation approach used. We introduced a segmentation approach enabling the determination of planar structures within a given point cloud. It was originally developed for the purpose of building modeling but has proven to be well suited for large-scale geomorphological analysis as well. The result is an assignment of the original points to a set of planes. Each plane is represented by its plane parameters. Additionally, numerous quality and quantity parameters are determined (e.g. aspect, slope, local roughness, etc.). In this contribution, we investigate the influence of the control parameters required for the plane segmentation on the geomorphological interpretation of the derived product. The respective control parameters may be determined either automatically (i.e. estimated from the given data) or manually (i.e. supervised parameter estimation). Additionally, the result might be influenced by whether data processing is performed locally (i.e. using tiles) or globally. Local processing of the data has the advantages of generally performing faster, having lower hardware requirements, and enabling the determination of more detailed information. By contrast, especially in geomorphological interpretation, global data processing enables the determination of large-scale relations within the analyzed dataset. We investigated the influence of control parameter settings on the geomorphological interpretation of airborne and terrestrial laser scanning datasets of the landslide at Doren (Vorarlberg, Austria), of airborne laser scanning data of the western cordilleras of the central Andes, and of HRSC terrain data of the Mars surface. Topics discussed are the suitability of automated versus manual determination of control parameters, the influence of the definition of the area of interest (local versus global application), and computational performance.
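
    The per-plane attributes mentioned above (plane parameters, aspect, slope, local roughness) follow from a simple least-squares fit; a sketch under assumed conventions (aspect measured clockwise from the +y axis; all names illustrative):

        import numpy as np

        def plane_attributes(points):
            # Fit a plane to an (N, 3) point set by SVD of the centered
            # coordinates; the right singular vector with the smallest
            # singular value is the plane normal.
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            normal = vt[-1] if vt[-1, 2] >= 0 else -vt[-1]  # orient upward
            slope = np.degrees(np.arccos(normal[2]))        # tilt from horizontal
            aspect = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
            roughness = (centered @ normal).std()           # point-to-plane scatter
            return normal, slope, aspect, roughness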

  7. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  8. Ancient geodynamics and global-scale hydrology on Mars.

    PubMed

    Phillips, R J; Zuber, M T; Solomon, S C; Golombek, M P; Jakosky, B M; Banerdt, W B; Smith, D E; Williams, R M; Hynek, B M; Aharonson, O; Hauck, S A

    2001-03-30

    Loading of the lithosphere of Mars by the Tharsis rise explains much of the global shape and long-wavelength gravity field of the planet, including a ring of negative gravity anomalies and a topographic trough around Tharsis, as well as gravity anomaly and topographic highs centered in Arabia Terra and extending northward toward Utopia. The Tharsis-induced trough and antipodal high were largely in place by the end of the Noachian Epoch and exerted control on the location and orientation of valley networks. The release of carbon dioxide and water accompanying the emplacement of approximately 3 x 10^8 cubic kilometers of Tharsis magmas may have sustained a warmer climate than at present, enabling the formation of ancient valley networks and fluvial landscape denudation in and adjacent to the large-scale trough.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent, and we implement it within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.

  10. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  11. ViDI: Virtual Diagnostics Interface. Volume 2; Unified File Format and Web Services as Applied to Seamless Data Transfer

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A. (Technical Monitor); Schwartz, Richard J.

    2004-01-01

    The desire to revolutionize the aircraft design cycle from its currently lethargic pace to a fast turn-around operation enabling the optimization of non-traditional configurations is a critical challenge facing the aeronautics industry. In response, a large scale effort is underway to not only advance the state of the art in wind tunnel testing, computational modeling, and information technology, but to unify these often disparate elements into a cohesive design resource. This paper will address Seamless Data Transfer, the critical central nervous system that will enable a wide variety of components to work together.

  12. CloudMan as a platform for tool, data, and analysis distribution.

    PubMed

    Afgan, Enis; Chapman, Brad; Taylor, James

    2012-11-27

    Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion. However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions.

  13. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture is found to be different from the modeling scales for these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Bénard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible to generate fine scale soil moisture fields across large extents, based on coarse scale observations. Application of this approach is likely in generating fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
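
    For reference, the nudging construction described above is commonly written in the following schematic form; the symbols (model dynamics F, interpolant I_h of the coarse observations v, nudging gain mu) follow the general CDA literature rather than this abstract.

```latex
% CDA-style nudging: the model state u is relaxed toward the interpolant
% I_h of the coarse observations v with gain \mu > 0.
\frac{\partial u}{\partial t} = F(u) + \mu\,\bigl(I_h(v) - I_h(u)\bigr)
```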

  14. Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for the complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a workstation, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.

  15. Power monitoring and control for large scale projects: SKA, a case study

    NASA Astrophysics Data System (ADS)

    Barbosa, Domingos; Barraca, João Paulo; Maia, Dalmiro; Carvalho, Bruno; Vieira, Jorge; Swart, Paul; Le Roux, Gerhard; Natarajan, Swaminathan; van Ardenne, Arnold; Seca, Luis

    2016-07-01

    Large sensor-based science infrastructures for radio astronomy like the SKA will be among the most intensive data-driven projects in the world, facing very demanding computation, storage, management and, above all, power requirements. The geographically wide distribution of the SKA and its associated processing requirements, in the form of tailored High Performance Computing (HPC) facilities, require a greener approach to the Information and Communications Technologies (ICT) adopted for data processing, to enable operational compliance with potentially strict power budgets. Reducing electricity costs and improving system-level power monitoring, generation, and management are paramount to avoiding future inefficiencies and higher costs, and to enabling fulfillment of the key science cases. Here we outline major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on green ICT for science and smart power monitoring and control.

  16. Integrating Data from GRACE and Other Observing Systems for Hydrological Research and Applications

    NASA Technical Reports Server (NTRS)

    Rodell, M.; Famiglietti, J. S.; McWilliams, E.; Beaudoing, H. K.; Li, B.; Zaitchik, B.; Reichle, R.; Bolten, J.

    2011-01-01

    The Gravity Recovery and Climate Experiment (GRACE) mission provides a unique view of water cycle dynamics, enabling the only space based observations of water on and beneath the land surface that are not limited by depth. GRACE data are immediately useful for large scale applications such as ice sheet ablation monitoring, but they are even more valuable when combined with other types of observations, either directly or within a data assimilation system. Here we describe recent results of hydrological research and applications projects enabled by GRACE. These include the following: 1) global monitoring of interannual variability of terrestrial water storage and groundwater; 2) water balance estimates of evapotranspiration over several large river basins; 3) NASA's Energy and Water Cycle Study (NEWS) state of the global water budget project; 4) drought indicator products now being incorporated into the U.S. Drought Monitor; 5) GRACE data assimilation over several regions.

  17. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-12-01

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. However, these metaphors (e.g., word clouds, tag clouds, etc.) often lack interactivity to explore the information, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. Further, transitioning between levels of detail (i.e., from terms to full documents) can be challenging. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, information is placed based on its relative similarity to other information to create the “near = similar” geography metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
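
    The abstract does not specify Typograph's layout model, so the following is only a hedged sketch of the general "near = similar" idea: compute pairwise dissimilarities between items and embed them in two dimensions so that spatial distance approximates dissimilarity. The TF-IDF/cosine/MDS pipeline and the toy documents below are assumed stand-ins, not Typograph's actual method.

```python
# Sketch of a "near = similar" layout: pairwise cosine dissimilarities
# embedded in 2-D with metric MDS. All inputs are toy examples.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS

docs = ["graphene transistor gate stack", "qPCR absolute quantification",
        "graphene film tensile testing", "real-time qPCR fluorescence"]
X = TfidfVectorizer().fit_transform(docs)     # term weights per document
D = cosine_distances(X)                       # pairwise dissimilarity matrix
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)     # 2-D "geography" of documents
for doc, (x, y) in zip(docs, xy):
    print(f"({x:+.2f}, {y:+.2f})  {doc}")
```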

  18. High-frequency self-aligned graphene transistors with transferred gate stacks.

    PubMed

    Cheng, Rui; Bai, Jingwei; Liao, Lei; Zhou, Hailong; Chen, Yu; Liu, Lixin; Lin, Yung-Chen; Jiang, Shan; Huang, Yu; Duan, Xiangfeng

    2012-07-17

    Graphene has attracted enormous attention for radio-frequency transistor applications because of its exceptional high carrier mobility, high carrier saturation velocity, and large critical current density. Herein we report a new approach for the scalable fabrication of high-performance graphene transistors with transferred gate stacks. Specifically, arrays of gate stacks are first patterned on a sacrificial substrate, and then transferred onto arbitrary substrates with graphene on top. A self-aligned process, enabled by the unique structure of the transferred gate stacks, is then used to position precisely the source and drain electrodes with minimized access resistance or parasitic capacitance. This process has therefore enabled scalable fabrication of self-aligned graphene transistors with unprecedented performance including a record-high cutoff frequency up to 427 GHz. Our study defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra-high-frequency circuits.

  19. Adiabatic quantum-flux-parametron cell library adopting minimalist design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeuchi, Naoki, E-mail: takeuchi-naoki-kx@ynu.jp; Yamanashi, Yuki; Yoshikawa, Nobuyuki

    We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.

  20. Adiabatic quantum-flux-parametron cell library adopting minimalist design

    NASA Astrophysics Data System (ADS)

    Takeuchi, Naoki; Yamanashi, Yuki; Yoshikawa, Nobuyuki

    2015-05-01

    We herein build an adiabatic quantum-flux-parametron (AQFP) cell library adopting minimalist design and a symmetric layout. In the proposed minimalist design, every logic cell is designed by arraying four types of building block cells: buffer, NOT, constant, and branch cells. Therefore, minimalist design enables us to effectively build and customize an AQFP cell library. The symmetric layout reduces unwanted parasitic magnetic coupling and ensures a large mutual inductance in an output transformer, which enables very long wiring between logic cells. We design and fabricate several logic circuits using the minimal AQFP cell library so as to test logic cells in the library. Moreover, we experimentally investigate the maximum wiring length between logic cells. Finally, we present an experimental demonstration of an 8-bit carry look-ahead adder designed using the minimal AQFP cell library and demonstrate that the proposed cell library is sufficiently robust to realize large-scale digital circuits.

  1. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    PubMed

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
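
    As a rough illustration of the LRE idea (not the LRE Analyzer's implementation), per-cycle amplification efficiency can be computed from successive fluorescence readings and regressed against fluorescence; within the central region of the profile the decline is close to linear, and the intercept gives the maximal efficiency without a standard curve. The synthetic profile and the 10-90% central-region cutoffs below are assumptions made for the sketch.

```python
# Toy sketch of linear regression of efficiency (LRE) on a synthetic
# amplification profile; cutoffs and names are illustrative assumptions.
import numpy as np

def lre_fit(F):
    """F: background-subtracted fluorescence readings, one per cycle."""
    E = F[1:] / F[:-1] - 1.0                      # per-cycle efficiency
    Fc = F[1:]
    central = (Fc > 0.1 * F.max()) & (Fc < 0.9 * F.max())
    slope, Emax = np.polyfit(Fc[central], E[central], 1)
    return Emax, -Emax / slope                    # max efficiency, plateau Fmax

cycles = np.arange(40)
F = 10.0 / (1.0 + np.exp(-(cycles - 22.0) / 2.0))  # logistic-like profile
Emax, Fmax = lre_fit(F)
print(f"Emax ~ {Emax:.3f}, Fmax ~ {Fmax:.2f}")     # ~0.649 and ~10.0 here
```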

  2. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special-purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems, thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
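
    As a schematic of the "general-purpose iterative method plus special-purpose pre-conditioner" combination mentioned above, the sketch below applies ILU-preconditioned GMRES to a synthetic complex-valued sparse system with SciPy; the matrix is a made-up stand-in, not a CEM discretization, and the preconditioner choice is purely illustrative.

```python
# ILU-preconditioned GMRES on a synthetic complex sparse system; the
# matrix, density, and drop tolerance are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
rng = np.random.default_rng(0)
A = sp.random(n, n, density=1e-3, random_state=0) \
    + 1j * sp.random(n, n, density=1e-3, random_state=1)
A = (A + (10.0 + 1.0j) * sp.eye(n)).tocsc()   # complex valued, well posed
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

ilu = spla.spilu(A, drop_tol=1e-5)            # incomplete-LU preconditioner
M = spla.LinearOperator((n, n), matvec=ilu.solve, dtype=A.dtype)
x, info = spla.gmres(A, b, M=M)               # general-purpose iterative solve
print("info:", info, "residual norm:", np.linalg.norm(A @ x - b))
```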

  3. ARQiv-HTS, a versatile whole-organism screening platform enabling in vivo drug discovery at high-throughput rates

    PubMed Central

    White, David T; Eroglu, Arife Unal; Wang, Guohua; Zhang, Liyun; Sengupta, Sumitra; Ding, Ding; Rajpurohit, Surendra K; Walker, Steven L; Ji, Hongkai; Qian, Jiang; Mumm, Jeff S

    2017-01-01

    The zebrafish has emerged as an important model for whole-organism small-molecule screening. However, most zebrafish-based chemical screens have achieved only mid-throughput rates. Here we describe a versatile whole-organism drug discovery platform that can achieve true high-throughput screening (HTS) capacities. This system combines our automated reporter quantification in vivo (ARQiv) system with customized robotics, and is termed ‘ARQiv-HTS’. We detail the process of establishing and implementing ARQiv-HTS: (i) assay design and optimization, (ii) calculation of sample size and hit criteria, (iii) large-scale egg production, (iv) automated compound titration, (v) dispensing of embryos into microtiter plates, and (vi) reporter quantification. We also outline what we see as best practice strategies for leveraging the power of ARQiv-HTS for zebrafish-based drug discovery, and address technical challenges of applying zebrafish to large-scale chemical screens. Finally, we provide a detailed protocol for a recently completed inaugural ARQiv-HTS effort, which involved the identification of compounds that elevate insulin reporter activity. Compounds that increased the number of insulin-producing pancreatic beta cells represent potential new therapeutics for diabetic patients. For this effort, individual screening sessions took 1 week to conclude, and sessions were performed iteratively approximately every other day to increase throughput. At the conclusion of the screen, more than a half million drug-treated larvae had been evaluated. Beyond this initial example, however, the ARQiv-HTS platform is adaptable to almost any reporter-based assay designed to evaluate the effects of chemical compounds in living small-animal models. ARQiv-HTS thus enables large-scale whole-organism drug discovery for a variety of model species and from numerous disease-oriented perspectives. PMID:27831568

  4. A Java Program for LRE-Based Real-Time qPCR that Enables Large-Scale Absolute Quantification

    PubMed Central

    Rutledge, Robert G.

    2011-01-01

    Background Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Findings Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. Conclusions The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples. PMID:21407812

  5. Dynamics of Large-scale Coronal Structures as Imaged during the 2012 and 2013 Total Solar Eclipses

    NASA Astrophysics Data System (ADS)

    Alzate, Nathalia; Habbal, Shadia R.; Druckmüller, Miloslav; Emmanouilidis, Constantinos; Morgan, Huw

    2017-10-01

    White light images acquired at the peak of solar activity cycle 24, during the total solar eclipses of 2012 November 13 and 2013 November 3, serendipitously captured erupting prominences accompanied by CMEs. Application of state-of-the-art image processing techniques revealed the intricate details of two “atypical” large-scale structures, with strikingly sharp boundaries. By complementing the processed white light eclipse images with processed images from co-temporal Solar Dynamics Observatory/AIA and SOHO/LASCO observations, we show how the shape of these atypical structures matches the shape of faint CME shock fronts, which traversed the inner corona a few hours prior to the eclipse observations. The two events were not associated with any prominence eruption but were triggered by sudden brightening events on the solar surface accompanied by sprays and jets. The discovery of the indelible impact that frequent and innocuous transient events in the low corona can have on large-scale coronal structures was enabled by the radial span of the high-resolution white light eclipse images, starting from the solar surface out to several solar radii, currently unmatched by any coronagraphic instrumentation. These findings raise the interesting question as to whether large-scale coronal structures can ever be considered stationary. They also point to the existence of a much larger number of CMEs that goes undetected from the suite of instrumentation currently observing the Sun.

  6. Community-based native seed production for restoration in Brazil - the role of science and policy.

    PubMed

    Schmidt, I B; de Urzedo, D I; Piña-Rodrigues, F C M; Vieira, D L M; de Rezende, G M; Sampaio, A B; Junqueira, R G P

    2018-05-20

    Large-scale restoration programmes in the tropics require large volumes of high quality, genetically diverse and locally adapted seeds from a large number of species. However, scarcity of native seeds is a critical restriction to achieving restoration targets. In this paper, we analyse three successful community-based networks that supply native seeds and seedlings for Brazilian Amazon and Cerrado restoration projects. In addition, we propose directions to promote local participation and to address legal, technical and commercialisation issues for up-scaling the market of native seeds for restoration with high quality and social justice. We argue that effective community-based restoration arrangements should follow some principles: (i) seed production must be based on real market demand; (ii) non-governmental and governmental organisations have a key role in supporting local organisation, legal requirements and selling processes; (iii) local ecological knowledge and labour should be valued, enabling local communities to promote large-scale seed production; (iv) applied research can help develop appropriate techniques and solve technical issues. The case studies from Brazil and principles presented here can be useful for up-scaling restoration ecology efforts in many other parts of the world and especially in tropical countries where improving rural community income is a strategy for biodiversity conservation and restoration. © 2018 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.

  7. A Laser Interferometric Miniature Seismometer

    DTIC Science & Technology

    2010-09-01

    Carr, Dustin W.; Baldwin, Patrick C.; Knapp-Kleinsorge, Shawn A.; Milburn, Howard; Robinson, David (Symphony Acoustics, Inc.)

    Sponsored by the National Nuclear Security Administration, Award No. DE-FG02-08ER85108.001. The threat of...performance, compact device can enable rapid deployment of large-scale arrays, which can in turn be used to provide higher-quality data during times of

  8. A Holistic Redundancy- and Incentive-Based Framework to Improve Content Availability in Peer-to-Peer Networks

    ERIC Educational Resources Information Center

    Herrera-Ruiz, Octavio

    2012-01-01

    Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm to build large-scale distributed systems. P2P enables the creation, dissemination and access to information at low cost and without the need of dedicated coordinating entities. However, existing P2P systems fail to provide…

  9. From a meso- to micro-scale connectome: array tomography and mGRASP

    PubMed Central

    Rah, Jong-Cheol; Feng, Linqing; Druckmann, Shaul; Lee, Hojin; Kim, Jinhyun

    2015-01-01

    Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single synapse resolution and large-scale capacity, especially at multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde virus, brain clearing techniques, and activity indicators will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools to enable mapping of brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors. PMID:26089781

  10. Imaging Exoplanets with the Exo-S Starshade Mission: Key Enabling Technologies

    NASA Astrophysics Data System (ADS)

    Kasdin, N. Jeremy; Lisman, Doug; Shaklan, Stuart; Thomson, Mark; Webb, David; Cady, Eric; Exo-S Science; Technology Definition Team, Exoplanet Program Probe Study Design Team

    2015-01-01

    There is increasing interest in the use of a starshade, a spacecraft employing a large screen flying in formation with a space telescope, for providing the starlight suppression needed to detect and characterize exoplanets. In particular, Exo-S is a NASA study directed at designing a probe-scale exoplanet mission employing a starshade. In this poster we present the enabling technologies needed to make a starshade mission a reality: flight-like petals, a deployable truss to support the petals, optical edges, optical diffraction studies, and formation sensing and control. We show the status of each technology gap and summarize our progress over the past 5 years with plans for the next 3 years in demonstrating feasibility in all these areas. In particular, since no optical end-to-end test is possible, it is necessary to both show that a starshade can be built and deployed to the required accuracy and, via laboratory experiments at smaller scale, that the optical modeling upon which the accuracy requirements are based is validated. We show our progress verifying key enabling technologies, including demonstrating that a starshade petal made from flight-like materials can be manufactured to the needed accuracy and that a central truss with attached petals can be deployed with the needed precision. We also summarize our sub-scale lab experiments that demonstrate we can achieve the contrast predicted by our optical models.

  11. Framework for Informed Policy Making Using Data from National Environmental Observatories

    NASA Astrophysics Data System (ADS)

    Wee, B.; Taylor, J. R.; Poinsatte, J.

    2012-12-01

    Large-scale environmental changes pose challenges that straddle environmental, economic, and social boundaries. As we design and implement climate adaptation strategies at the Federal, state, local, and tribal levels, accessible and usable data are essential for implementing actions that are informed by the best available information. Data-intensive science has been heralded as an enabler for scientific breakthroughs powered by advanced computing capabilities and interoperable data systems. Those same capabilities can be applied to data and information systems that facilitate the transformation of data into highly processed products. At the interface of scientifically informed public policy and data-intensive science lies the potential for producers of credible, integrated, multi-scalar environmental data like the National Ecological Observatory Network (NEON) and its partners to capitalize on data and informatics interoperability initiatives that enable the integration of environmental data from across credible data sources. NSF's large-scale environmental observatories such as NEON and the Ocean Observatories Initiative (OOI) are designed to provide high-quality, long-term environmental data for research. These data are also meant to be repurposed for operational needs such as risk management, vulnerability assessments, and resource management. The proposed USDA Agriculture Research Service (ARS) Long Term Agro-ecosystem Research (LTAR) network is another example of such an environmental observatory that will produce credible data for environmental/agricultural forecasting and informing policy. To facilitate data fusion across observatories, there is a growing call for observation systems to more closely coordinate and standardize how variables are measured. Together with observation standards, cyberinfrastructure standards enable the proliferation of an ecosystem of applications that utilize diverse, high-quality, credible data. Interoperability facilitates the integration of data from multiple credible sources, and enables the repurposing of data for use at different geographical scales. Metadata that captures the transformation of data into value-added products ("provenance") lends reproducibility and transparency to the entire process. This way, the datasets and model code used to create any product can be examined by other parties. This talk outlines a pathway for transforming environmental data into value-added products by various stakeholders to better inform sustainable agriculture using data from environmental observatories including NEON and LTAR.

  12. Implementation of the Agitated Behavior Scale in the Electronic Health Record.

    PubMed

    Wilson, Helen John; Dasgupta, Kritis; Michael, Kathleen

    The purpose of the study was to implement an Agitated Behavior Scale through an electronic health record and to evaluate the usability of the scale in a brain injury unit at a rehabilitation hospital. A quality improvement project was conducted in the brain injury unit at a large rehabilitation hospital with registered nurses as participants using convenience sampling. The project consisted of three phases and included education, implementation of the scale in the electronic health record, and administration of the survey questionnaire, which utilized the system usability scale. The Agitated Behavior Scale was found to be usable, and there was 92.2% compliance with the use of the electronic Agitated Behavior Scale. The Agitated Behavior Scale was effectively implemented in the electronic health record and was found to be usable in the assessment of agitation. Utilization of the scale through the electronic health record on a daily basis will allow for early identification of agitation in patients with traumatic brain injury and enable prompt interventions to manage agitation.

  13. Live immunization against East Coast fever--current status.

    PubMed

    Di Giulio, Giuseppe; Lynen, Godelieve; Morzaria, Subhash; Oura, Chris; Bishop, Richard

    2009-02-01

    The infection-and-treatment method (ITM) for immunization of cattle against East Coast fever has historically been used only on a limited scale because of logistical and policy constraints. Recent large-scale deployment among pastoralists in Tanzania has stimulated demand. Concurrently, a suite of molecular tools, developed from the Theileria parva genome, has enabled improved quality control of the immunizing stabilate and post-immunization monitoring of the efficacy and biological impact of ITM in the field. This article outlines the current status of ITM immunization in the field, with associated developments in the molecular epidemiology of T. parva.

  14. Ethanol for a sustainable energy future.

    PubMed

    Goldemberg, José

    2007-02-09

    Renewable energy is one of the most efficient ways to achieve sustainable development. Increasing its share in the world matrix will help prolong the existence of fossil fuel reserves, address the threats posed by climate change, and enable better security of the energy supply on a global scale. Most of the "new renewable energy sources" are still undergoing large-scale commercial development, but some technologies are already well established. These include Brazilian sugarcane ethanol, which, after 30 years of production, is a global energy commodity that is fully competitive with motor gasoline and appropriate for replication in many countries.

  15. Small-scale disequilibrium in a magmatic inclusion and its more silicic host

    NASA Technical Reports Server (NTRS)

    Davidson, Jon P.; Holden, Peter; Halliday, Alex N.; De Silva, Shanaka L.

    1990-01-01

    An investigation of small-scale isotopic, compositional, and mineralogical variation across the interface of a basaltic-andesite inclusion and its dacitic host from Cerro-Chascon, a Holocene dome in northern Chile, is discussed. Serial sectioning across the interface of the inclusion and its host dacite, complemented by microdrill sampling and detailed microprobe work, has enabled an examination of the scale of mixing and chemical disequilibrium. The composition of the inclusion is found to be relatively homogeneous; the dacite host is heterogeneous on a small scale; the isotopic composition in the marginal zone shows the highest Sr-87/Sr-86 and lowest Nd-143/Nd-144; the large plagioclase crystals in the inclusions and host are xenocrystic. These differences are reconciled with a model of magma evolution in a crustal magma chamber.

  16. Exploring Potential of Crowdsourced Geographic Information in Studies of Active Travel and Health: Strava Data and Cycling Behaviour

    NASA Astrophysics Data System (ADS)

    Sun, Y.

    2017-09-01

    In the development of sustainable transportation and green cities, policymakers encourage people to commute by cycling and walking instead of using motor vehicles. On the one hand, cycling and walking decrease air pollution emissions. On the other hand, cycling and walking offer health benefits by increasing people's physical activity. Earlier studies investigating spatial patterns of active travel (cycling and walking) were limited by a lack of spatially fine-grained data. In recent years, with the development of information and communications technology, GPS-enabled devices have become popular and portable. With smart phones or smart watches, people are able to record their cycling or walking GPS traces as they move. A large number of cyclists and pedestrians upload their GPS traces to sport social media to share their historical traces with other people. Those sport social media thus become a potential source of spatially fine-grained cycling and walking data. Very recently, Strava Metro began offering aggregated cycling and walking data with high spatial granularity. Strava Metro aggregates a large number of cycling and walking GPS traces of Strava users to streets or intersections across a city. Accordingly, as a kind of crowdsourced geographic information, the aggregated data are useful for investigating spatial patterns of cycling and walking activities, and thus hold high potential for understanding cycling and walking behavior at a large spatial scale. This study is a first step toward demonstrating the usefulness of Strava Metro data for exploring cycling and walking patterns at a large scale.

  17. Intact mass detection, interpretation, and visualization to automate Top-Down proteomics on a large scale

    PubMed Central

    Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.

    2011-01-01

    Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We present these advances as a key aspect of realizing Top-Down MS on a proteomic scale. PMID:20848673

  18. The scientific targets of the SCOPE mission

    NASA Astrophysics Data System (ADS)

    Fujimoto, M.; Saito, Y.; Tsuda, Y.; Shinohara, I.; Kojima, H.

    Future Japanese magnetospheric mission "SCOPE" is now under study (planned to be launched in 2012). The main purpose of this mission is to investigate the dynamic behaviors of plasmas in the Earth's magnetosphere from the viewpoint of cross-scale coupling. Dynamical collisionless space plasma phenomena, be they large scale as a whole, are characterized by coupling over various time and spatial scales. The best example would be the magnetic reconnection process, which is a large-scale energy conversion process but has a small key region at the heart of its engine. Inside the key region, electron-scale dynamics plays the key role in liberating the frozen-in constraint, by which reconnection is allowed to proceed. The SCOPE mission is composed of one large mother satellite and four small daughter satellites. The mother spacecraft will be equipped with an electron detector that has 10 msec time resolution, so that dynamics down to the electron scale are resolved. Three of the four daughter satellites surround the mother satellite 3-dimensionally with mutual distances between several km and several thousand km, which are varied during the mission. Plasma measurements on these spacecraft will have 1 sec resolution and will provide information on meso-scale plasma structure. The fourth daughter satellite stays near the mother satellite at a distance of less than 100 km. By correlation between the two plasma wave instruments on the daughter and the mother spacecraft, propagation of the waves and information on the electron-scale dynamics will be obtained. By this strategy, both meso- and micro-scale information on dynamics is obtained, enabling us to investigate the physics of space plasma from the cross-scale coupling point of view.

  19. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  20. Simulating Nationwide Pandemics: Applying the Multi-scale Epidemiologic Simulation and Analysis System to Human Infectious Diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dombroski, M; Melius, C; Edmunds, T

    2008-09-24

    This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applied the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay at home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis from the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. There are several limitations of the methodology that should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
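
    The burn-out-versus-spread dichotomy reported above is a generic feature of stochastic epidemic models and is easy to reproduce in miniature. The chain-binomial SIR below is a schematic stand-in for illustration only, not the MESA system; the population size and rates are invented.

```python
# Toy stochastic (chain-binomial) SIR: with R0 = beta/gamma = 1.5, runs
# split into many small outbreaks and some large ones. Not the MESA model.
import numpy as np

def final_size(beta=0.3, gamma=0.2, N=10_000, I0=1, seed=0):
    rng = np.random.default_rng(seed)
    S, I, R = N - I0, I0, 0
    while I > 0:
        new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    return R  # total number ever infected

sizes = sorted(final_size(seed=s) for s in range(20))
print(sizes)  # typically bimodal: early extinctions vs. large outbreaks
```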

  1. Camphor-Enabled Transfer and Mechanical Testing of Centimeter-Scale Ultrathin Films.

    PubMed

    Wang, Bin; Luo, Da; Li, Zhancheng; Kwon, Youngwoo; Wang, Meihui; Goo, Min; Jin, Sunghwan; Huang, Ming; Shen, Yongtao; Shi, Haofei; Ding, Feng; Ruoff, Rodney S

    2018-05-21

    Camphor is used to transfer centimeter-scale ultrathin films onto custom-designed substrates for mechanical (tensile) testing. Compared to traditional transfer methods using dissolving/peeling to remove the support-layers, camphor is sublimed away in air at low temperature, thereby avoiding additional stress on the as-transferred films. Large-area ultrathin films can be transferred onto hollow substrates without damage by this method. Tensile measurements are made on centimeter-scale 300 nm-thick graphene oxide film specimens, much thinner than the ≈2 μm minimum thickness of macroscale graphene-oxide films previously reported. Tensile tests were also done on two different types of large-area samples of adlayer free CVD-grown single-layer graphene supported by a ≈100 nm thick polycarbonate film; graphene stiffens this sample significantly, thus the intrinsic mechanical response of the graphene can be extracted. This is the first tensile measurement of centimeter-scale monolayer graphene films. The Young's modulus of polycrystalline graphene ranges from 637 to 793 GPa, while for near single-crystal graphene, it ranges from 728 to 908 GPa (folds parallel to the tensile loading direction) and from 683 to 775 GPa (folds orthogonal to the tensile loading direction), demonstrating the mechanical performance of large-area graphene in a size scale relevant to many applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Moving contact lines on vibrating surfaces

    NASA Astrophysics Data System (ADS)

    Solomenko, Zlatko; Spelt, Peter; Scott, Julian

    2017-11-01

    Large-scale simulations of flows with moving contact lines under realistic conditions generally require a subgrid-scale model (analyses based on matched asymptotics) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. Introduction of complexities would require further investigation of the contact-line region, however. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting/dewetting. The challenge here is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives will also be presented on the final objective of the work, which is to develop a subgrid-scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under the allocation A0012B06893 made by GENCI.
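
    For orientation, the matched-asymptotics contact-line models referred to above are typically of Cox-Voinov type, relating the apparent dynamic angle at a resolved outer scale to the microscopic angle at the slip length. The schematic form below uses notation standard in that literature, not symbols taken from this abstract.

```latex
% Cox--Voinov-type relation: apparent angle theta_d at outer scale L,
% microscopic angle theta_m at slip length lambda, capillary number Ca.
\theta_d^{\,3} = \theta_m^{\,3} + 9\,\mathrm{Ca}\,\ln\!\left(\frac{L}{\lambda}\right)
```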

  3. Formation Flying and the Stellar Imager Mission Concept

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.

    2003-01-01

    The Stellar Imager (SI) is envisioned as a space-based, UV-optical interferometer composed of 10 or more one-meter class elements distributed with a maximum baseline of 0.5 km. It is designed to image stars and binaries with sufficient resolution to enable long-term studies of stellar magnetic activity patterns, for comparison with those on the sun. It will also support asteroseismology (acoustic imaging) to probe stellar internal structure, differential rotation, and large-scale circulations. SI will enable us to understand the various effects of the magnetic fields of stars, the dynamos that generate these fields, and the internal structure and dynamics of the stars. The ultimate goal of the mission is to achieve the best-possible forecasting of solar activity as a driver of climate and space weather on time scales ranging from months up to decades, and an understanding of the impact of stellar magnetic activity on life in the Universe. In this paper we briefly describe the scientific goals of the mission, the performance requirements needed to address these goals, and the "enabling technology" development efforts required, with specific attention for this meeting to the formation-flying aspects.

  4. Promoting country ownership and stewardship of health programs: The global fund experience.

    PubMed

    Atun, Rifat; Kazatchkine, Michel

    2009-11-01

    The Global Fund to Fight AIDS, Tuberculosis and Malaria was established in 2002 to provide large-scale financing to middle- and low-income countries to intensify the fight against the 3 diseases. Its model has enabled strengthening of local health leadership to improve governance of HIV programs in 5 ways. First, the Global Fund has encouraged development of local capacity to generate technically sound proposals reflecting country needs and priorities. Second, through dual-track financing, where countries are encouraged to nominate at least one government and one nongovernment principal recipient to lead program implementation, the Global Fund has enabled civil society and other nongovernmental organizations to play a critical role in the design, implementation, and oversight of HIV programs. Third, investments to strengthen community systems have enabled greater involvement of community leaders in effective mobilization of demand and scale-up for services to reach vulnerable groups. Fourth, capacity building outside the state sector has improved community participation in governance of public health. Finally, an emphasis on inclusiveness and diversity in planning, implementation, and oversight has broadly enhanced country coordination capacity. Strengthening local leadership capacity and governance are critical to building efficient and equitable health systems to deliver universal coverage of HIV services.

  5. The Stellar Imager (SI) Mission Concept

    NASA Technical Reports Server (NTRS)

    Carpenter, Kenneth G.; Schrijver, Carolus J.; Lyon, Richard G.; Mundy, Lee G.; Allen, Ronald J.; Armstrong, Thomas; Danchi, William C.; Karovska, Margarita; Marzouk, Joe; Mazzuca, Lisa M.

    2002-01-01

    The Stellar Imager (SI) is envisioned as a space-based, UV-optical interferometer composed of 10 or more one-meter class elements distributed with a maximum baseline of 0.5 km. It is designed to image stars and binaries with sufficient resolution to enable long-term studies of stellar magnetic activity patterns, for comparison with those on the sun. It will also support asteroseismology (acoustic imaging) to probe stellar internal structure, differential rotation, and large-scale circulations. SI will enable us to understand the various effects of the magnetic fields of stars, the dynamos that generate these fields, and the internal structure and dynamics of the stars. The ultimate goal of the mission is to achieve the best-possible forecasting of solar activity as a driver of climate and space weather on time scales ranging from months up to decades, and an understanding of the impact of stellar magnetic activity on life in the Universe. In this paper we describe the scientific goals of the mission, the performance requirements needed to address these goals, the "enabling technology" development efforts being pursued, and the design concepts now under study for the full mission and a possible pathfinder mission.

  6. Systematic methods for defining coarse-grained maps in large biomolecules.

    PubMed

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
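
    The first step of the ED-CG scheme described above, characterizing essential dynamics by PCA on a structural ensemble, can be sketched compactly. The random trajectory below is a placeholder for real simulation data, and the CG-site assignment step (choosing the map that best preserves the leading subspace) is only indicated in a comment, so this is an assumption-laden illustration rather than the authors' code.

```python
# PCA on a (stand-in) structural ensemble: the leading principal
# components span the "essential dynamics" an ED-CG map should preserve.
import numpy as np

frames, atoms = 500, 300
rng = np.random.default_rng(0)
X = rng.standard_normal((frames, atoms * 3))   # placeholder trajectory (flattened xyz)
Xc = X - X.mean(axis=0)                        # remove the mean structure
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
variance = s**2 / (frames - 1)                 # variance along each PC
print("fraction of motion in first 10 PCs:",
      variance[:10].sum() / variance.sum())
# An ED-CG map would then group atoms into CG sites so that projecting the
# trajectory onto those sites retains as much of the leading-PC subspace
# as possible.
```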

  7. A Toolkit for bulk PCR-based marker design from next-generation sequence data: application for development of a framework linkage map in bulb onion (Allium cepa L.)

    PubMed Central

    2012-01-01

    Background Although modern sequencing technologies permit the ready detection of numerous DNA sequence variants in any organisms, converting such information to PCR-based genetic markers is hampered by a lack of simple, scalable tools. Onion is an example of an under-researched crop with a complex, heterozygous genome where genome-based research has previously been hindered by limited sequence resources and genetic markers. Results We report the development of generic tools for large-scale web-based PCR-based marker design in the Galaxy bioinformatics framework, and their application for development of next-generation genetics resources in a wide cross of bulb onion (Allium cepa L.). Transcriptome sequence resources were developed for the homozygous doubled-haploid bulb onion line ‘CUDH2150’ and the genetically distant Indian landrace ‘Nasik Red’, using 454™ sequencing of normalised cDNA libraries of leaf and shoot. Read mapping of ‘Nasik Red’ reads onto ‘CUDH2150’ assemblies revealed 16836 indel and SNP polymorphisms that were mined for portable PCR-based marker development. Tools for detection of restriction polymorphisms and primer set design were developed in BioPython and adapted for use in the Galaxy workflow environment, enabling large-scale and targeted assay design. Using PCR-based markers designed with these tools, a framework genetic linkage map of over 800cM spanning all chromosomes was developed in a subset of 93 F2 progeny from a very large F2 family developed from the ‘Nasik Red’ x ‘CUDH2150’ inter-cross. The utility of tools and genetic resources developed was tested by designing markers to transcription factor-like polymorphic sequences. Bin mapping these markers using a subset of 10 progeny confirmed the ability to place markers within 10 cM bins, enabling increased efficiency in marker assignment and targeted map refinement. The major genetic loci conditioning red bulb colour (R) and fructan content (Frc) were located on this map by QTL analysis. Conclusions The generic tools developed for the Galaxy environment enable rapid development of sets of PCR assays targeting sequence variants identified from Illumina and 454 sequence data. They enable non-specialist users to validate and exploit large volumes of next-generation sequence data using basic equipment. PMID:23157543

  8. A toolkit for bulk PCR-based marker design from next-generation sequence data: application for development of a framework linkage map in bulb onion (Allium cepa L.).

    PubMed

    Baldwin, Samantha; Revanna, Roopashree; Thomson, Susan; Pither-Joyce, Meeghan; Wright, Kathryn; Crowhurst, Ross; Fiers, Mark; Chen, Leshi; Macknight, Richard; McCallum, John A

    2012-11-19

    Although modern sequencing technologies permit the ready detection of numerous DNA sequence variants in any organism, converting such information to PCR-based genetic markers is hampered by a lack of simple, scalable tools. Onion is an example of an under-researched crop with a complex, heterozygous genome, where genome-based research has previously been hindered by limited sequence resources and genetic markers. We report the development of generic tools for large-scale, web-based PCR marker design in the Galaxy bioinformatics framework, and their application to the development of next-generation genetics resources in a wide cross of bulb onion (Allium cepa L.). Transcriptome sequence resources were developed for the homozygous doubled-haploid bulb onion line 'CUDH2150' and the genetically distant Indian landrace 'Nasik Red', using 454™ sequencing of normalised cDNA libraries of leaf and shoot. Mapping of 'Nasik Red' reads onto 'CUDH2150' assemblies revealed 16836 indel and SNP polymorphisms that were mined for portable PCR-based marker development. Tools for detection of restriction polymorphisms and primer set design were developed in BioPython and adapted for use in the Galaxy workflow environment, enabling large-scale and targeted assay design. Using PCR-based markers designed with these tools, a framework genetic linkage map of over 800 cM spanning all chromosomes was developed in a subset of 93 F2 progeny from a very large F2 family derived from the 'Nasik Red' x 'CUDH2150' inter-cross. The utility of the tools and genetic resources developed was tested by designing markers to transcription factor-like polymorphic sequences. Bin mapping these markers using a subset of 10 progeny confirmed the ability to place markers within 10 cM bins, enabling increased efficiency in marker assignment and targeted map refinement. The major genetic loci conditioning red bulb colour (R) and fructan content (Frc) were located on this map by QTL analysis. The generic tools developed for the Galaxy environment enable rapid development of sets of PCR assays targeting sequence variants identified from Illumina and 454 sequence data. They enable non-specialist users to validate and exploit large volumes of next-generation sequence data using basic equipment.

  9. The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four-telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and to span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  10. Probing aerosol indirect effect on deep convection using idealized cloud-resolving simulations with parameterized large-scale dynamics.

    NASA Astrophysics Data System (ADS)

    Anber, U.; Wang, S.; Gentine, P.; Jensen, M. P.

    2017-12-01

    A framework is introduced to investigate the indirect impact of aerosol loading on tropical deep convection using three-dimensional idealized cloud-system-resolving simulations with coupled large-scale circulation. The large-scale dynamics is parameterized using a spectral weak temperature gradient approximation that utilizes the dominant balance in the tropics between adiabatic cooling and diabatic heating. The aerosol loading effect is examined by varying the number concentration of cloud condensation nuclei (CCN) available to form cloud droplets in the bulk microphysics scheme over a wide range from 30 to 5000, without including any radiative effect; the radiative cooling is prescribed at a constant rate in order to isolate the microphysical effect. Increasing the aerosol number concentration causes mean precipitation to decrease monotonically, despite the increase in cloud condensate. This reduction in precipitation efficiency is attributed to a reduction in the surface enthalpy fluxes, and not to the divergent circulation, as the gross moist stability remains unchanged. We derive a simple scaling argument based on the moist static energy budget that enables a direct estimation of changes in precipitation given known changes in surface enthalpy fluxes and the constant gross moist stability. The impact on cloud hydrometeors and microphysical properties is also examined and is consistent with the macro-physical picture.
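
    The scaling argument itself is not spelled out in the abstract; under the stated assumptions (prescribed radiative cooling and an unchanged gross moist stability \tilde{M}), a schematic form of the column moist static energy budget gives

      \delta P \;\propto\; \frac{\delta F_s}{\tilde{M}},

    where F_s denotes the surface enthalpy fluxes, so a reduction in F_s maps onto a proportional reduction in precipitation. The symbols here are generic placeholders rather than the paper's exact notation.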

  11. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
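
    The quoted figures are mutually consistent; as a quick check of the arithmetic:

      12.5\ \mathrm{h} = 750\ \mathrm{min}, \qquad 750\ \mathrm{min} / 158 \approx 4.7\ \mathrm{min\ per\ iteration},

    in line with the reported reduction to less than 5 minutes per iteration on 32 nodes.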

  12. Statistical analysis of kinetic energy entrainment in a model wind turbine array boundary layer

    NASA Astrophysics Data System (ADS)

    Cal, Raul Bayoan; Hamilton, Nicholas; Kang, Hyung-Suk; Meneveau, Charles

    2012-11-01

    For large wind farms, kinetic energy must be entrained from the flow above the wind turbines to replenish wakes and enable power extraction in the array. Various statistical features of turbulence causing vertical entrainment of mean-flow kinetic energy are studied using hot-wire velocimetry data taken in a model wind farm in a scaled wind tunnel experiment. Conditional statistics and spectral decompositions are employed to characterize the most relevant turbulent flow structures and determine their length-scales. Sweep and ejection events are shown to be the largest contributors to the vertical kinetic energy flux, although their relative contribution depends upon the location in the wake. Sweeps are shown to be dominant in the region above the wind turbine array. A spectral analysis of the data shows that large scales of the flow, about the size of the rotor diameter in length or larger, dominate the vertical entrainment. The flow is more incoherent below the array, causing decreased vertical fluxes there. The results show that improving the rate of vertical kinetic energy entrainment into wind turbine arrays is a standing challenge and would require modifying the large-scale structures of the flow. This work was funded in part by the National Science Foundation (CBET-0730922, CBET-1133800 and CBET-0953053).
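
    A standard way to quantify the vertical entrainment described here is the flux of mean-flow kinetic energy through a horizontal plane near hub height; in schematic form (the notation is assumed, not taken from the paper),

      \Phi \;=\; -\,\langle u'w' \rangle\, \bar{U},

    where \bar{U} is the mean streamwise velocity and \langle u'w' \rangle the Reynolds shear stress. Sweeps (u' > 0, w' < 0) and ejections (u' < 0, w' > 0) are the quadrant events that dominate \langle u'w' \rangle, which is why they control the flux.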

  13. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we applied to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  14. Lagrangian-averaged model for magnetohydrodynamic turbulence and the absence of bottlenecks.

    PubMed

    Pietarila Graham, Jonathan; Mininni, Pablo D; Pouquet, Annick

    2009-07-01

    We demonstrate that, for the case of quasiequipartition between the velocity and the magnetic field, the Lagrangian-averaged magnetohydrodynamics (LAMHD) alpha model reproduces well both the large-scale and the small-scale properties of turbulent flows; in particular, it displays no increased (superfilter) bottleneck effect with its ensuing enhanced energy spectrum at the onset of the subfilter scales. This is in contrast to the case of the neutral fluid in which the Lagrangian-averaged Navier-Stokes alpha model is somewhat limited in its applications because of the formation of spatial regions with no internal degrees of freedom and subsequent contamination of superfilter-scale spectral properties. We argue that, as the Lorentz force breaks the conservation of circulation and enables spectrally nonlocal energy transfer (associated with Alfvén waves), it is responsible for the absence of a viscous bottleneck in magnetohydrodynamics (MHD), as compared to the fluid case. As LAMHD preserves Alfvén waves and the circulation properties of MHD, there is also no (superfilter) bottleneck found in LAMHD, making this method capable of large reductions in required numerical degrees of freedom; specifically, we find a reduction factor of approximately 200 when compared to a direct numerical simulation on a large grid of 1536^3 points at the same Reynolds number.

  15. Space Technology 5 Multipoint Observations of Temporal and Spatial Variability of Field-Aligned Currents

    NASA Technical Reports Server (NTRS)

    Le, G.; Wang, Y.; Slavin, J. A.; Strangeway, R. L.

    2009-01-01

    Space Technology 5 (ST5) is a constellation mission consisting of three microsatellites. It provides the first multipoint magnetic field measurements in low Earth orbit, which enables us to separate spatial and temporal variations. In this paper, we present a study of the temporal variability of field-aligned currents using the ST5 data. We examine the field-aligned current observations during and after a geomagnetic storm and compare the magnetic field profiles at the three spacecraft. The multipoint data demonstrate that mesoscale current structures, commonly embedded within large-scale current sheets, are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min. On the other hand, the data also show that the time scales over which the currents remain relatively stable are approximately 1 min for mesoscale currents and approximately 10 min for large-scale currents. These temporal features are very likely associated with dynamic variations of their charge carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of mesoscale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  16. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
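
    MEMD itself is an involved algorithm, but the fusion step described above (combining same-indexed, frequency-aligned IMFs across images) is easy to sketch. Here memd() is a hypothetical stand-in for a multivariate EMD implementation, and the local-energy weighting is one plausible fusion rule rather than necessarily the authors' choice:

      import numpy as np

      def fuse_imfs(imfs):
          # imfs: (n_images, n_scales, n_pixels); MEMD guarantees that same-indexed
          # IMFs carry the same frequency scale across all input images.
          energy = imfs ** 2                            # pointwise local energy
          weights = energy / (energy.sum(axis=0) + 1e-12)
          fused_scales = (weights * imfs).sum(axis=0)   # fuse each scale across images
          return fused_scales.sum(axis=0)               # recombine scales into one image

      # imfs = memd(flattened_images)   # hypothetical joint decomposition
      # fused = fuse_imfs(imfs)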

  17. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  18. Effect of Carboxymethylation on the Rheological Properties of Hyaluronan

    PubMed Central

    Wendling, Rian J.; Christensen, Amanda M.; Quast, Arthur D.; Atzet, Sarah K.; Mann, Brenda K.

    2016-01-01

    Chemical modifications made to hyaluronan to enable covalent crosslinking to form a hydrogel or to attach other molecules may alter the physical properties as well, which have physiological importance. Here we created carboxymethyl hyaluronan (CMHA) with varied degree of modification and investigated the effect on the viscosity of CMHA solutions. Viscosity decreased initially as modification increased, with a minimum viscosity for about 30–40% modification. This was followed by an increase in viscosity around 45–50% modification. The pH of the solution had a variable effect on viscosity, depending on the degree of carboxymethyl modification and buffer. The presence of phosphates in the buffer led to decreased viscosity. We also compared large-scale production lots of CMHA to lab-scale and found that large-scale required extended reaction times to achieve the same degree of modification. Finally, thiolated CMHA was disulfide crosslinked to create hydrogels with increased viscosity and shear-thinning aspects compared to CMHA solutions. PMID:27611817

  19. Energy harvesting: small scale energy production from ambient sources

    NASA Astrophysics Data System (ADS)

    Yeatman, Eric M.

    2009-03-01

    Energy harvesting - the collection of otherwise unexploited energy in the local environment - is attracting increasing attention for the powering of electronic devices. While the power levels that can be reached are typically modest (microwatts to milliwatts), the key motivation is to avoid the need for battery replacement or recharging in portable or inaccessible devices. Wireless sensor networks are a particularly important application: the availability of essentially maintenance free sensor nodes, as enabled by energy harvesting, will greatly increase the feasibility of large scale networks, in the paradigm often known as pervasive sensing. Such pervasive sensing networks, used to monitor buildings, structures, outdoor environments or the human body, offer significant benefits for large scale energy efficiency, health and safety, and many other areas. Sources of energy for harvesting include light, temperature differences, and ambient motion, and a wide range of miniature energy harvesters based on these sources have been proposed or demonstrated. This paper reviews the principles and practice in miniature energy harvesters, and discusses trends, suitable applications, and possible future developments.

  20. Nanomanufacturing : nano-structured materials made layer-by-layer.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, James V.; Cheng, Shengfeng; Grest, Gary Stephen

    Large-scale, high-throughput production of nano-structured materials (i.e., nanomanufacturing) is a strategic area in manufacturing, with markets projected to exceed $1T by 2015. Nanomanufacturing is still in its infancy; process/product developments are costly and only touch on the potential opportunities enabled by growing nanoscience discoveries. The greatest promise for high-volume manufacturing lies in age-old coating and imprinting operations. For materials with tailored nm-scale structure, imprinting/embossing must be achieved at high speeds (roll-to-roll) and/or over large areas (batch operation) with feature sizes less than 100 nm. Dispersion coatings with nanoparticles can also tailor structure through self- or directed-assembly. Layering films structured with these processes have tremendous potential for efficient manufacturing of microelectronics, photovoltaics and other topical nano-structured devices. This project is designed to perform the requisite R&D to bring Sandia's technology base in computational mechanics to bear on this scale-up problem. Project focus is enforced by addressing a promising imprinting process currently being commercialized.

  1. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR), must satisfy the stringent experimental constraints of GR in the solar system. Viable candidates invoke a “screening” mechanism, that dynamically suppresses deviations in high density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
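
    One concrete example of such a reweighting is the density-dependent mark proposed by White (2016), which up-weights low-density regions where screening is weakest:

      m(\delta) \;=\; \left( \frac{1+\delta_s}{1+\delta_s+\delta} \right)^{p},

    where \delta is the local overdensity and (\delta_s, p) are tunable parameters. The abstract does not state which mark the authors adopt, so this form is given only as an illustration of the idea.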

  2. Translational bioinformatics in the cloud: an affordable alternative

    PubMed Central

    2010-01-01

    With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073

  3. Recent developments in microfluidic large scale integration.

    PubMed

    Araci, Ismail Emre; Brisk, Philip

    2014-02-01

    In 2002, Thorsen et al. integrated thousands of micromechanical valves on a single microfluidic chip and demonstrated that the control of fluidic networks can be simplified through multiplexors [1]. This enabled the realization of highly parallel and automated fluidic processes with a substantial sample-economy advantage. Moreover, the fabrication of these devices by multilayer soft lithography was easy and reliable, which contributed to the power of the technology: microfluidic large-scale integration (mLSI). Since then, mLSI has found use in a wide variety of applications in biology and chemistry. In the meantime, efforts to improve the technology have been ongoing. These efforts mostly focus on novel materials, components, micromechanical valve actuation methods, and chip architectures for mLSI. In this review, these technological advances are discussed and recent examples of mLSI applications are summarized. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets in the order of several Petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  5. A scalable multi-photon coincidence detector based on superconducting nanowires.

    PubMed

    Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K

    2018-06-04

    Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.
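
    The count of 136 resolvable events follows directly from the detector geometry: 16 single-photon outcomes plus every unordered pair of distinct segments,

      16 + \binom{16}{2} \;=\; 16 + 120 \;=\; 136.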

  6. A Prominence Puzzle Explained?

    NASA Astrophysics Data System (ADS)

    Yeates, A. R.; Mackay, D. H.; van Ballegooijen, A. A.

    2009-02-01

    Long-standing observations reveal a global organisation of the magnetic field direction in solar prominences (aka filaments), large clouds of cool dense plasma suspended in the Sun's hot corona. However, theorists have thus far been unable to explain the origin of this hemispheric pattern. In particular, simple shearing by large-scale surface motions would appear to lead to the wrong magnetic field direction. To explain the observations, we have developed a new model of the global magnetic field evolution in the solar corona over six months. For the first time our model can follow the build-up of magnetic helicity and shear on a global scale, driven by flux emergence and surface motions. The model is successful in predicting the correct magnetic field direction in the vast majority of prominences tested, and has enabled us to determine the key physical mechanisms behind the mysterious hemispheric pattern.

  7. Plasmonic resonances of nanoparticles from large-scale quantum mechanical simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Xiang, Hongping; Zhang, Mingliang; Lu, Gang

    2017-09-01

    Plasmonic resonance of metallic nanoparticles results from coherent motion of its conduction electrons, driven by incident light. For the nanoparticles less than 10 nm in diameter, localized surface plasmonic resonances become sensitive to the quantum nature of the conduction electrons. Unfortunately, quantum mechanical simulations based on time-dependent Kohn-Sham density functional theory are computationally too expensive to tackle metal particles larger than 2 nm. Herein, we introduce the recently developed time-dependent orbital-free density functional theory (TD-OFDFT) approach which enables large-scale quantum mechanical simulations of plasmonic responses of metallic nanostructures. Using TD-OFDFT, we have performed quantum mechanical simulations to understand size-dependent plasmonic response of Na nanoparticles and plasmonic responses in Na nanoparticle dimers and trimers. An outlook of future development of the TD-OFDFT method is also presented.

  8. Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle

    NASA Technical Reports Server (NTRS)

    Yang, Kyung-Soo; Ferziger, Joel H.

    1993-01-01

    In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on bulk velocity above the obstacle and obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include LES at Re = 82,000 (Case 4) using conventional Smagorinsky subgrid-scale model and a wall-layer model. The results will be compared with the experiment of Dimaczek et al.

  9. Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives

    PubMed Central

    Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.

    2012-01-01

    While technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, electronic data capture methods will likewise significantly expedite the populating of large-scale neuroimaging databases. As they do and these archives grow in size, a particular challenge lies in examining and interacting with the information that these resources contain through the development of compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for the graphical rendering of, and dynamic interaction with the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes from across a diverse set of clinical neuroimaging studies drawn from a leading neuroimaging database. Using a collection of cortical surface metrics and means for examining brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space and enables classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a highly unique means to interact with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181

  10. A New Method for Rapid Screening of End-Point PCR Products: Application to Single Genome Amplified HIV and SIV Envelope Amplicons

    PubMed Central

    Houzet, Laurent; Deleage, Claire; Satie, Anne-Pascale; Merlande, Laetitia; Mahe, Dominique; Dejucq-Rainsford, Nathalie

    2015-01-01

    PCR is the most widely applied technique for large-scale screening of bacterial clones, mouse genotypes, virus genomes, etc. A drawback of large PCR screening is that amplicon analysis is usually performed using gel electrophoresis, a step that is labor-intensive, tedious, and generates chemical waste. Single genome amplification (SGA) is used to characterize the diversity and evolutionary dynamics of virus populations within infected hosts. SGA is based on the isolation of a single template molecule by limiting dilution followed by nested PCR amplification, and requires the analysis of hundreds of reactions per sample, making large-scale SGA studies very challenging. Here we present a novel approach entitled Long Amplicon Melt Profiling (LAMP) based on the analysis of the melting profile of the PCR reactions using SYBR Green and/or EvaGreen fluorescent dyes. The LAMP method represents an attractive alternative to gel electrophoresis and enables the quick discrimination of positive reactions. We validate LAMP for SIV and HIV env-SGA, in 96- and 384-well plate formats. Because the melt profiling allows the screening of several thousands of PCR reactions in a cost-effective, rapid and robust way, we believe it will greatly facilitate any large-scale PCR screening. PMID:26053379
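
    The melt-profile readout reduces to standard melting-curve analysis: a positive reaction shows a peak in -dF/dT at the amplicon's melting temperature, while a negative one does not. A minimal sketch, with an assumed peak-height threshold:

      import numpy as np

      def is_positive(temps, fluorescence, threshold=50.0):
          # -dF/dT peaks at the amplicon melting temperature in positive wells
          melt_curve = -np.gradient(fluorescence, temps)
          return melt_curve.max() > threshold   # threshold value is an assumption

      # temps: e.g. np.arange(65.0, 95.0, 0.5); fluorescence: SYBR Green/EvaGreen signal
      # per well; scoring a 96- or 384-well plate is then a single loop, no gels required.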

  11. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  12. Reverse engineering and analysis of large genome-scale gene networks

    PubMed Central

    Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas

    2013-01-01

    Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
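
    TINGe's B-spline estimator and parallel permutation machinery are not reproduced here, but the quantity at its core is pairwise mutual information with a permutation-based null. A plain histogram version in numpy, for two expression profiles x and y:

      import numpy as np

      def mutual_info(x, y, bins=10):
          # Histogram (not B-spline) estimate of MI between two expression profiles
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy /= pxy.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

      def mi_pvalue(x, y, n_perm=1000, seed=0):
          # Permutation test: shuffling one profile breaks any real dependence
          rng = np.random.default_rng(seed)
          observed = mutual_info(x, y)
          null = [mutual_info(rng.permutation(x), y) for _ in range(n_perm)]
          return (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)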

  13. Community health worker programs in India: a rights-based review.

    PubMed

    Bhatia, Kavita

    2014-09-01

    This article presents a historical review of national community health worker (CHW) programs in India using a gender- and rights-based lens. The aim is to derive relevant policy implications to stem attrition and enable sustenance of large-scale CHW programs. For the literature review, relevant government policies, minutes of meetings, reports, newspaper articles and statistics were accessed through official websites and a hand search was conducted for studies on the rights-based aspects of large-scale CHW programs. The analysis shows that the CHWs in three successive Indian national CHW programs have consistently asked for reforms in their service conditions, including increased remuneration. Despite an evolution in stakeholder perspectives regarding the rights of CHWs, service reforms are slow. Performance-based payments do not provide the financial security expected by CHWs as demonstrated in the recent Accredited Social Health Activist (ASHA) program. In most countries, CHWs, who are largely women, have never been integrated into the established, salaried team of health system workers. The two hallmark characteristics of CHWs, namely, their volunteer status and the flexibility of their tasks and timings, impede their rights. The consequences of initiating or neglecting standardization should be considered by all countries with large-scale CHW programs like the ASHA program. © Royal Society for Public Health 2014.

  14. Current challenges in quantifying preferential flow through the vadose zone

    NASA Astrophysics Data System (ADS)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  15. Public-Private Partnership: Joint recommendations to improve downloads of large Earth observation data

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Murphy, K. J.; Baynes, K.; Lynnes, C.

    2016-12-01

    With the volume of Earth observation data expanding rapidly, cloud computing is quickly changing the way Earth observation data is processed, analyzed, and visualized. The cloud infrastructure provides the flexibility to scale up to large volumes of data and handle high-velocity data streams efficiently. Having freely available Earth observation data collocated on a cloud infrastructure creates opportunities for innovation and value-added data re-use in ways unforeseen by the original data provider. These innovations spur new industries and applications and spawn new scientific pathways that were previously limited by data volume and computational infrastructure. NASA, in collaboration with Amazon, Google, and Microsoft, has jointly developed a set of recommendations to enable efficient transfer of Earth observation data from existing data systems to a cloud computing infrastructure. The purpose of these recommendations is to provide guidelines against which all data providers can evaluate existing data systems, and to remedy any issues uncovered so as to enable efficient search, access, and use of large volumes of data. The guidelines also ensure that all cloud providers use a common methodology for bulk-downloading data from data providers, sparing data providers from building custom capabilities to meet the needs of individual cloud providers. The intent is to share these recommendations with other Federal agencies and organizations that serve Earth observation data. Their adoption will likewise benefit any user moving large volumes of data out of existing data systems, including cloud providers, cloud users such as scientists, and users working in high-performance computing environments.

  16. Behind the Scenes with OpenLearn: The Challenges of Researching the Provision of Open Educational Resources

    ERIC Educational Resources Information Center

    Godwin, Stephen; McAndrew, Patrick; Santos, Andreia

    2008-01-01

    Web-enabled technology is now being applied on a large scale. In this paper we look at open access provision of teaching and learning leading to many users with varying patterns and motivations for use. This has provided us with a research challenge to find methods that help us understand and explain such initiatives. We describe ways to model the…

  17. Unmanned Aircraft Systems Traffic Management (UTM)

    NASA Technical Reports Server (NTRS)

    Johnson, Ronald D.

    2018-01-01

    UTM is an 'air traffic management' ecosystem for uncontrolled operations. UTM utilizes industry's ability to supply services under the FAA's regulatory authority where these services do not exist today. UTM development will ultimately enable the management of large-scale, low-altitude UAS operations. The operational concept addresses beyond-visual-line-of-sight UAS operations under 400 ft AGL and covers the information architecture, data exchange protocols, software functions, roles and responsibilities of the FAA and operators, and performance requirements.

  18. Dual redox catalysts for oxygen reduction and evolution reactions: towards a redox flow Li-O2 battery.

    PubMed

    Zhu, Yun Guang; Jia, Chuankun; Yang, Jing; Pan, Feng; Huang, Qizhao; Wang, Qing

    2015-06-11

    A redox flow lithium-oxygen battery (RFLOB) using soluble redox catalysts was demonstrated with good performance for large-scale energy storage. The new device enables the reversible formation and decomposition of Li2O2 via redox targeting reactions in a gas diffusion tank, spatially separated from the electrode, which obviates the passivation and pore clogging of the cathode.

  19. Design and Analysis of a Hyperspectral Microwave Receiver Subsystem

    NASA Technical Reports Server (NTRS)

    Blackwell, W.; Galbraith, C.; Hancock, T.; Leslie, R.; Osaretin, I.; Shields, M.; Racette, P.; Hillard, L.

    2012-01-01

    Hyperspectral microwave (HM) sounding has been proposed to achieve unprecedented performance. HM operation is achieved using multiple banks of RF spectrometers with large aggregate bandwidth. A principal challenge is Size/Weight/Power scaling. Objectives of this work: 1) Demonstrate ultra-compact (100 cm^3) 52-channel IF processor (enabler); 2) Demonstrate a hyperspectral microwave receiver subsystem; and 3) Deliver a flight-ready system to validate HM sounding.

  20. Data Intensive Scientific Workflows on a Federated Cloud: CRADA Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    The Fermilab Scientific Computing Division and the KISTI Global Science Experimental Data Hub Center have built a prototypical large-scale infrastructure to handle scientific workflows of stakeholders to run on multiple cloud resources. The demonstrations have been in the areas of (a) Data-Intensive Scientific Workflows on Federated Clouds, (b) Interoperability and Federation of Cloud Resources, and (c) Virtual Infrastructure Automation to enable On-Demand Services.

  1. The Tomographic Ionized-Carbon Mapping Experiment (TIME) CII Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Staniszewski, Z.; Bock, J. J.; Bradford, C. M.; Brevik, J.; Cooray, A.; Gong, Y.; Hailey-Dunsheath, S.; O'Brient, R.; Santos, M.; Shirokoff, E.; Silva, M.; Zemcov, M.

    2014-09-01

    The Tomographic Ionized-Carbon Mapping Experiment (TIME) and TIME-Pilot are proposed imaging spectrometers to measure reionization and large-scale structure at redshifts 5-9. We seek to exploit the 158 μm rest-frame emission of [CII], which becomes measurable at 200-300 GHz at reionization redshifts. Here we describe the scientific motivation, give an overview of the proposed instrument, and highlight key technological developments underway to enable these measurements.

  2. Clinical benchmarking enabled by the digital health record.

    PubMed

    Ricciardi, T N; Masarie, F E; Middleton, B

    2001-01-01

    Office-based physicians are often ill equipped to report aggregate information about their patients and practice of medicine, since their practices have relied upon paper records for the management of clinical information. Physicians who do not have access to large-scale information technology support can now benefit from low-cost clinical documentation and reporting tools. We developed a hosted clinical data mart for users of a web-enabled charting tool, targeting the solo or small group practice. The system uses secure Java Server Pages with a dashboard-like menu to provide point-and-click access to simple reports such as case mix, medications, utilization, productivity, and patient demographics in its first release. The system automatically normalizes user-entered clinical terms to enhance the quality of structured data. Individual providers benefit from rapid patient identification for disease management, quality of care self-assessments, drug recalls, and compliance with clinical guidelines. The system provides knowledge integration by linking to trusted sources of online medical information in context. Information derived from the clinical record is clinically more accurate than billing data. Provider self-assessment and benchmarking empowers physicians, who may resent "being profiled" by external entities. In contrast to large-scale data warehouse projects, the current system delivers immediate value to individual physicians who choose an electronic clinical documentation tool.

  3. A new high-energy cathode for a Na-ion battery with ultrahigh stability.

    PubMed

    Park, Young-Uk; Seo, Dong-Hwa; Kwon, Hyung-Soon; Kim, Byoungkook; Kim, Jongsoon; Kim, Haegyeom; Kim, Inkyung; Yoo, Han-Ill; Kang, Kisuk

    2013-09-18

    Large-scale electric energy storage is a key enabler for the use of renewable energy. Recently, the room-temperature Na-ion battery has been rehighlighted as an alternative low-cost technology for this application. However, significant challenges such as energy density and long-term stability must be addressed. Herein, we introduce a novel cathode material, Na1.5VPO4.8F0.7, for Na-ion batteries. This new material provides an energy density of ~600 Wh kg^-1, the highest value among cathodes, originating from both the multielectron redox reaction (1.2 e^- per formula unit) and the high potential (~3.8 V vs Na^+/Na) of the tailored vanadium redox couple (V^3.8+/V^5+). Furthermore, an outstanding cycle life (~95% capacity retention for 100 cycles and ~84% for extended 500 cycles) could be achieved, which we attribute to the small volume change (2.9%) upon cycling, the smallest volume change among known Na intercalation cathodes. The open crystal framework with two-dimensional Na diffusional pathways leads to low activation barriers for Na diffusion, enabling excellent rate capability. We believe that this new material can bring the low-cost room-temperature Na-ion battery a step closer to a sustainable large-scale energy storage system.

  4. Human Genomic Loci Important in Common Infectious Diseases: Role of High-Throughput Sequencing and Genome-Wide Association Studies

    PubMed Central

    Sserwadda, Ivan; Amujal, Marion; Namatovu, Norah

    2018-01-01

    HIV/AIDS, tuberculosis (TB), and malaria are 3 major global public health threats that undermine development in many resource-poor settings. Recently, the notion that positive selection during epidemics or longer periods of exposure to common infectious diseases may have had a major effect in modifying the constitution of the human genome is being interrogated at a large scale in many populations around the world. This positive selection from infectious diseases increases power to detect associations in genome-wide association studies (GWASs). High-throughput sequencing (HTS) has transformed both the management of infectious diseases and continues to enable large-scale functional characterization of host resistance/susceptibility alleles and loci; a paradigm shift from single candidate gene studies. Application of genome sequencing technologies and genomics has enabled us to interrogate the host-pathogen interface for improving human health. Human populations are constantly locked in evolutionary arms races with pathogens; therefore, identification of common infectious disease-associated genomic variants/markers is important in therapeutic, vaccine development, and screening susceptible individuals in a population. This review describes a range of host-pathogen genomic loci that have been associated with disease susceptibility and resistant patterns in the era of HTS. We further highlight potential opportunities for these genetic markers. PMID:29755620

  5. Enabling Large-Scale IoT-Based Services through Elastic Publish/Subscribe.

    PubMed

    Vavassori, Sergio; Soriano, Javier; Fernández, Rafael

    2017-09-19

    In this paper, we report an algorithm that is designed to leverage the cloud as infrastructure to support Internet of Things (IoT) by elastically scaling in/out so that IoT-based service users never stop receiving sensors' data. This algorithm is able to provide an uninterrupted service to end users even during the scaling operation since its internal state repartitioning is transparent for publishers or subscribers; its scaling operation is time-bounded and depends only on the dimension of the state partitions to be transmitted to the different nodes. We describe its implementation in E-SilboPS, an elastic content-based publish/subscribe (CBPS) system specifically designed to support context-aware sensing and communication in IoT-based services. E-SilboPS is a key internal asset of the FIWARE IoT services enablement platform, which offers an architecture of components specifically designed to capture data from, or act upon, IoT devices as easily as reading/changing the value of attributes linked to context entities. In addition, we discuss the quantitative measurements used to evaluate the scale-out process, as well as the results of this evaluation. This new feature rounds out the context-aware content-based features of E-SilboPS by providing, for example, the necessary middleware for constructing dashboards and monitoring panels that are capable of dynamically changing queries and continuously handling data in IoT-based services.
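
    E-SilboPS's internal interfaces are not given in the abstract, but the content-based matching it scales out can be sketched as predicate evaluation over event attributes. The broker below is purely illustrative and is not the E-SilboPS API:

      from typing import Callable, Dict, List, Tuple

      Event = Dict[str, float]
      Predicate = Callable[[Event], bool]
      Callback = Callable[[Event], None]

      class ContentBasedBroker:
          def __init__(self) -> None:
              self.subs: List[Tuple[Predicate, Callback]] = []

          def subscribe(self, predicate: Predicate, callback: Callback) -> None:
              # A subscription is a predicate over event content, not a topic name;
              # elastic scaling amounts to repartitioning self.subs across nodes.
              self.subs.append((predicate, callback))

          def publish(self, event: Event) -> None:
              for predicate, callback in self.subs:
                  if predicate(event):
                      callback(event)

      broker = ContentBasedBroker()
      broker.subscribe(lambda e: e.get("temperature", 0.0) > 30.0,
                       lambda e: print("hot sensor:", e))
      broker.publish({"sensor": 7.0, "temperature": 31.5})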

  6. Enabling Large-Scale IoT-Based Services through Elastic Publish/Subscribe

    PubMed Central

    2017-01-01

    In this paper, we report an algorithm that is designed to leverage the cloud as infrastructure to support Internet of Things (IoT) by elastically scaling in/out so that IoT-based service users never stop receiving sensors’ data. This algorithm is able to provide an uninterrupted service to end users even during the scaling operation since its internal state repartitioning is transparent for publishers or subscribers; its scaling operation is time-bounded and depends only on the dimension of the state partitions to be transmitted to the different nodes. We describe its implementation in E-SilboPS, an elastic content-based publish/subscribe (CBPS) system specifically designed to support context-aware sensing and communication in IoT-based services. E-SilboPS is a key internal asset of the FIWARE IoT services enablement platform, which offers an architecture of components specifically designed to capture data from, or act upon, IoT devices as easily as reading/changing the value of attributes linked to context entities. In addition, we discuss the quantitative measurements used to evaluate the scale-out process, as well as the results of this evaluation. This new feature rounds out the context-aware content-based features of E-SilboPS by providing, for example, the necessary middleware for constructing dashboards and monitoring panels that are capable of dynamically changing queries and continuously handling data in IoT-based services. PMID:28925967

  7. Impact resistant boron/aluminum composites for large fan blades

    NASA Technical Reports Server (NTRS)

    Oller, T. L.; Salemme, C. T.; Bowden, J. H.; Doble, G. S.; Melnyk, P.

    1977-01-01

    Blade-like specimens were subjected to static ballistic impact testing to determine their relative FOD impact resistance levels. It was determined that a plus or minus 15 deg layup exhibited good impact resistance. The design of a large solid boron/aluminum fan blade was conducted based on the FOD test results. The CF6 fan blade was used as a baseline for these design studies. The solid boron/aluminum fan blade design was used to fabricate two blades. This effort enabled the assessment of the scale up of existing blade manufacturing details for the fabrication of a large B/Al fan blade. Existing CF6 fan blade tooling was modified for use in fabricating these blades.

  8. Using selection bias to explain the observed structure of Internet diffusions

    PubMed Central

    Golub, Benjamin; Jackson, Matthew O.

    2010-01-01

    Recently, large datasets stored on the Internet have enabled the analysis of processes, such as large-scale diffusions of information, at new levels of detail. In a recent study, Liben-Nowell and Kleinberg [(2008) Proc Natl Acad Sci USA 105:4633–4638] observed that the flow of information on the Internet exhibits surprising patterns whereby a chain letter reaches its typical recipient through long paths of hundreds of intermediaries. We show that a basic Galton–Watson epidemic model combined with the selection bias of observing only large diffusions suffices to explain these patterns. Thus, selection biases of which data we observe can radically change the estimation of classical diffusion processes. PMID:20534439
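
    The selection-bias argument is easy to reproduce numerically: simulate many near-critical Galton-Watson trees, keep only the rare large ones, and the survivors are deep, much like the observed chain-letter paths. A minimal sketch (the offspring distribution and size cutoff are assumptions for illustration):

      import numpy as np

      def gw_tree(rng, p=0.95, max_nodes=100_000):
          # One Galton-Watson tree, generation by generation; mean offspring = p < 1
          frontier, depth, size = 1, 0, 1
          while frontier and size < max_nodes:
              children = int(rng.binomial(2, p / 2, size=frontier).sum())
              size += children
              frontier = children
              depth += 1
          return size, depth

      rng = np.random.default_rng(1)
      runs = [gw_tree(rng) for _ in range(2000)]
      big = [depth for size, depth in runs if size > 200]   # keep only large diffusions
      print("median depth of large diffusions:", np.median(big) if big else "none seen")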

  9. Integrated nanoscale tools for interrogating living cells

    NASA Astrophysics Data System (ADS)

    Jorgolli, Marsela

    The development of next-generation, nanoscale technologies that interface biological systems will pave the way towards new understanding of such complex systems. Nanowires -- one-dimensional nanoscale structures -- have shown unique potential as an ideal physical interface to biological systems. Herein, we focus on the development of nanowire-based devices that can enable a wide variety of biological studies. First, we built upon standard nanofabrication techniques to optimize nanowire devices, resulting in perfectly ordered arrays of both opaque (silicon) and transparent (silicon dioxide) nanowires with user-defined structural profiles, densities, and overall patterns, as well as high sample consistency and large-scale production. The high-precision, well-controlled fabrication method, in conjunction with additional technologies, laid the foundation for the generation of highly specialized platforms for imaging, electrochemical interrogation, and molecular biology. Next, we utilized nanowires as the fundamental structure in the development of integrated nanoelectronic platforms to directly interrogate the electrical activity of biological systems. Initially, we generated a scalable intracellular electrode platform based on vertical nanowires that allows for parallel electrical interfacing to multiple mammalian neurons. Our prototype device consisted of 16 individually addressable stimulation/recording sites, each containing an array of 9 electrically active silicon nanowires. We showed that these vertical nanowire electrode arrays could intracellularly record and stimulate neuronal activity in dissociated cultures of rat cortical neurons, much as patch-clamp electrodes do. In addition, we used our intracellular electrode platform to measure multiple individual synaptic connections, which enables the reconstruction of the functional connectivity maps of neuronal circuits. To expand and improve the capability of this functional prototype device, we designed and fabricated a new hybrid chip that combines a front-side nanowire-based interface for neuronal recording with backside complementary metal oxide semiconductor (CMOS) circuits for on-chip multiplexing, voltage control for stimulation, signal amplification, and signal processing. Individual chips contain 1024 stimulation/recording sites, enabling large-scale interfacing of neuronal networks with single-cell resolution. Through electrical and electrochemical characterization of the devices, we demonstrated their enhanced functionality at a massively parallel scale. In our initial cell experiments, we achieved intracellular stimulation and recording of changes in membrane potential in a variety of cells, including HEK293T cells, cardiomyocytes, and rat cortical neurons. This demonstrated the device's capability for single-cell-resolution recording/stimulation, which, when extended to a large number of neurons in a massively parallel fashion, will enable the functional mapping of complex neuronal networks.

  10. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
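
    The kind of query SensorDB is built to make interactive is a roll-up over a high-rate stream. As a generic stand-in (this is ordinary pandas, not SensorDB's web API, and the sensor name is invented), downsampling a day of 1 Hz readings to hourly summaries looks like this:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
idx = pd.date_range("2015-01-01", periods=60 * 60 * 24, freq="s")  # 1 Hz day
stream = pd.Series(20 + rng.normal(0, 0.3, idx.size).cumsum() * 0.01,
                   index=idx, name="canopy_temp_C")

hourly = stream.resample("1h").agg(["mean", "min", "max"])
print(hourly.head(3))
```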

  11. Large-scale atomistic simulations of helium-3 bubble growth in complex palladium alloys

    DOE PAGES

    Hale, Lucas M.; Zimmerman, Jonathan A.; Wong, Bryan M.

    2016-05-18

    Palladium is an attractive material for hydrogen and hydrogen-isotope storage applications due to its properties of large storage density and high diffusion of lattice hydrogen. When considering tritium storage, the material’s structural and mechanical integrity is threatened by both the embrittlement effect of hydrogen and the creation and evolution of additional crystal defects (e.g., dislocations, stacking faults) caused by the formation and growth of helium-3 bubbles. Using recently developed inter-atomic potentials for the palladium-silver-hydrogen system, we perform large-scale atomistic simulations to examine the defect-mediated mechanisms that govern helium bubble growth. Our simulations show the evolution of a distribution of material defects, and we compare the material behavior displayed with expectations from experiment and theory. We also present density functional theory calculations to characterize ideal tensile and shear strengths for these materials, which enable an understanding of how and why our developed potentials either meet or confound these expectations.

  12. Effects of numerical dissipation and unphysical excursions on scalar-mixing estimates in large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Sharan, Nek; Matheou, Georgios; Dimotakis, Paul

    2017-11-01

    Artificial numerical dissipation decreases dispersive oscillations and can play a key role in mitigating unphysical scalar excursions in large eddy simulations (LES). Its influence on scalar mixing can be assessed through the resolved-scale scalar, Z , its probability density function (PDF), variance, spectra, and the budget of the horizontally averaged equation for Z2. LES of incompressible temporally evolving shear flow enabled us to study the influence of numerical dissipation on unphysical scalar excursions and mixing estimates. Flows with different mixing behavior, with both marching and non-marching scalar PDFs, are studied. Scalar fields for each flow are compared for different grid resolutions and numerical scalar-convection term schemes. As expected, increasing numerical dissipation enhances scalar mixing in the development stage of shear flow characterized by organized large-scale pairings with a non-marching PDF, but has little influence in the self-similar stage of flows with marching PDFs. Flow parameters and regimes sensitive to numerical dissipation help identify approaches to mitigate unphysical excursions while minimizing dissipation.
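
    The diagnostics named here are straightforward to compute from a resolved scalar field. A minimal sketch, using a synthetic field rather than LES output:

```python
import numpy as np

rng = np.random.default_rng(42)
Z = 0.5 + 0.2 * rng.standard_normal((128, 128, 128))  # mock resolved scalar

# Scalar PDF, variance, and the fraction of unphysical excursions outside
# the physical bounds 0 <= Z <= 1 that numerical dissipation is meant to tame
pdf, edges = np.histogram(Z, bins=64, range=(-0.25, 1.25), density=True)
variance = Z.var()
excursions = np.mean((Z < 0.0) | (Z > 1.0))
print(f"Var(Z) = {variance:.4f}; fraction outside [0, 1] = {excursions:.4%}")
```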

  13. Mantis: A Fast, Small, and Exact Large-Scale Sequence-Search Index.

    PubMed

    Pandey, Prashant; Almodaresi, Fatemeh; Bender, Michael A; Ferdman, Michael; Johnson, Rob; Patro, Rob

    2018-06-18

    Sequence-level searches on large collections of RNA sequencing experiments, such as the NCBI Sequence Read Archive (SRA), would enable one to ask many questions about the expression or variation of a given transcript in a population. Existing approaches, such as the sequence Bloom tree, suffer from fundamental limitations of the Bloom filter, resulting in slow build and query times, less-than-optimal space usage, and potentially large numbers of false-positives. This paper introduces Mantis, a space-efficient system that uses new data structures to index thousands of raw-read experiments and facilitates large-scale sequence searches. In our evaluation, index construction with Mantis is 6× faster and yields a 20% smaller index than the state-of-the-art split sequence Bloom tree (SSBT). For queries, Mantis is 6-108× faster than SSBT and has no false-positives or -negatives. For example, Mantis was able to search for all 200,400 known human transcripts in an index of 2,652 RNA sequencing experiments in 82 min; SSBT took close to 4 days. Copyright © 2018 Elsevier Inc. All rights reserved.
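
    Mantis's core idea, an exact k-mer index whose values are deduplicated "color classes" (sets of experiments), can be sketched in a few lines. The real system stores the map in a counting quotient filter, which this toy (with invented sequences and k = 5) does not attempt.

```python
K = 5

def kmers(seq):
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

experiments = {
    "exp1": "ACGTACGTGACC",
    "exp2": "TTGACCAGTACG",
    "exp3": "ACGTACGTTTTT",
}

class_of_set, kmer_to_class, classes = {}, {}, []
for kmer in {k for s in experiments.values() for k in kmers(s)}:
    members = frozenset(e for e, s in experiments.items() if kmer in kmers(s))
    if members not in class_of_set:           # dedupe identical color classes
        class_of_set[members] = len(classes)
        classes.append(members)
    kmer_to_class[kmer] = class_of_set[members]

def query(seq, theta=0.8):
    """Experiments containing >= theta of the query's k-mers (exact, no FPs)."""
    qk = kmers(seq)
    hits = {}
    for k in qk & kmer_to_class.keys():
        for e in classes[kmer_to_class[k]]:
            hits[e] = hits.get(e, 0) + 1
    return {e for e, c in hits.items() if c / len(qk) >= theta}

print(query("ACGTACGT"))        # k-mers shared by exp1 and exp3
```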

  14. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing volume of WAMI collections and of features extracted from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one approach recently applied to large-scale and big-data processing. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the WAMI images. The feature extraction for the images in each split is distributed to slave nodes in the Hadoop system, and feature extraction of each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
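
    The split-and-distribute pattern is easy to mimic outside Hadoop. A schematic stand-in for the pipeline follows, with a process pool in place of slave nodes; the frame names and the "feature" are made up.

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

def map_extract(split):
    """Map task: emit one (image_id, feature) record per image in the split."""
    return [(img, {"mean_intensity": sum(map(ord, img)) % 256}) for img in split]

def reduce_aggregate(mapped_outputs):
    """Reduce step: merge per-image features, as HDFS collects them."""
    features = defaultdict(dict)
    for records in mapped_outputs:
        for img, feats in records:
            features[img].update(feats)
    return dict(features)

if __name__ == "__main__":
    frames = [f"wami_frame_{i:04d}.tif" for i in range(100)]
    splits = [frames[i:i + 25] for i in range(0, len(frames), 25)]
    with ProcessPoolExecutor() as pool:       # workers stand in for slave nodes
        mapped = list(pool.map(map_extract, splits))
    print(len(reduce_aggregate(mapped)), "images processed")
```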

  15. A Mixed-Ligand Chiral Rhodium(II) Catalyst Enables the Enantioselective Total Synthesis of Piperarborenine B.

    PubMed

    Panish, Robert A; Chintala, Srinivasa R; Fox, Joseph M

    2016-04-11

    A novel, mixed-ligand chiral rhodium(II) catalyst, Rh2(S-NTTL)3(dCPA), has enabled the first enantioselective total synthesis of the natural product piperarborenine B. A crystal structure of Rh2(S-NTTL)3(dCPA) reveals a "chiral crown" conformation with a bulky dicyclohexylphenyl acetate ligand and three N-naphthalimido groups oriented on the same face of the catalyst. The natural product was prepared on large scale using rhodium-catalyzed bicyclobutanation/copper-catalyzed homoconjugate addition chemistry in the key step. The route proceeds in ten steps with an 8% overall yield and 92% ee. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. CloudMan as a platform for tool, data, and analysis distribution

    PubMed Central

    2012-01-01

    Background Cloud computing provides an infrastructure that facilitates large scale computational analysis in a scalable, democratized fashion, However, in this context it is difficult to ensure sharing of an analysis environment and associated data in a scalable and precisely reproducible way. Results CloudMan (usecloudman.org) enables individual researchers to easily deploy, customize, and share their entire cloud analysis environment, including data, tools, and configurations. Conclusions With the enabled customization and sharing of instances, CloudMan can be used as a platform for collaboration. The presented solution improves accessibility of cloud resources, tools, and data to the level of an individual researcher and contributes toward reproducibility and transparency of research solutions. PMID:23181507

  17. Experimental Quantification of Pore-Scale Flow Phenomena in 2D Heterogeneous Porous Micromodels: Multiphase Flow Towards Coupled Solid-Liquid Interactions

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kazemifar, F.; Blois, G.; Christensen, K. T.

    2017-12-01

    Geological sequestration of CO2 within saline aquifers is a viable technology for reducing CO2 emissions. Central to this goal is accurately predicting both the fidelity of candidate sites pre-injection of CO2 and its post-injection migration. Moreover, local fluid pressure buildup may cause activation of small pre-existing unidentified faults, leading to micro-seismic events, which could prove disastrous for societal acceptance of CCS, and possibly compromise seal integrity. Recent evidence shows that large-scale events are coupled with pore-scale phenomena, which necessitates the representation of pore-scale stress, strain, and multiphase flow processes in large-scale modeling. To this end, the pore-scale flow of water and liquid/supercritical CO2 is investigated under reservoir-relevant conditions, over a range of wettability conditions in 2D heterogeneous micromodels that reflect the complexity of a real sandstone. High-speed fluorescent microscopy, complemented by a fast differential pressure transmitter, allows for simultaneous measurement of the flow field within and the instantaneous pressure drop across the micromodels. A flexible micromodel is also designed and fabricated, to be used in conjunction with the micro-PIV technique, enabling the quantification of coupled solid-liquid interactions.

  18. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to Nmax = 22 and to reveal the ^4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
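
    The IR extrapolation step can be sketched with the standard exponential form E(L) = E_inf + a0 exp(-2 k_inf L), where L is the effective box size of the oscillator basis; the data points below are fabricated for illustration, not taken from the paper's NCSM results.

```python
import numpy as np
from scipy.optimize import curve_fit

def e_of_L(L, E_inf, a0, k_inf):
    # Standard IR extrapolation form for bound-state energies
    return E_inf + a0 * np.exp(-2.0 * k_inf * L)

# Synthetic "computed" energies at a few effective box sizes L (fm)
L = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
E = e_of_L(L, -31.99, 40.0, 0.45) + np.random.default_rng(1).normal(0, 0.01, L.size)

popt, pcov = curve_fit(e_of_L, L, E, p0=(-30.0, 10.0, 0.5))
print(f"E_inf = {popt[0]:.3f} MeV, k_inf = {popt[2]:.3f} fm^-1")
```

    The fitted k_inf plays the role of the small momentum scale that, per the abstract, tracks the threshold energy of the first open decay channel.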

  19. The Universe at Moderate Redshift

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Ostriker, Jeremiah P.

    1997-01-01

    The report covers the work done in the past year across a wide range of fields, including: properties of clusters of galaxies; topological properties of galaxy distributions in terms of galaxy types; patterns of the gravitational nonlinear clustering process; development of a ray-tracing algorithm to study gravitational lensing by galaxies, clusters, and large-scale structure, one application being the effects of weak gravitational lensing by large-scale structure on the determination of q(0); the origin of magnetic fields on galactic and cluster scales; the topological properties of Ly(alpha) clouds; the Ly(alpha) optical depth distribution; clustering properties of Ly(alpha) clouds; and a determination (lower bound) of Omega(b) based on the observed Ly(alpha) forest flux distribution. In the coming year, we plan to continue the investigation of Ly(alpha) clouds using larger dynamic range (about a factor of two) and better simulations (with more input physics included) than we have now. We will study the properties of galaxies on 1 - 100h(sup -1) Mpc scales using our state-of-the-art large-scale galaxy formation simulations of various cosmological models, which will have a resolution about a factor of 5 (in each dimension) better than our current best simulations. We also plan to study the properties of X-ray clusters using unprecedented, very high dynamic range (20,000) simulations, which will enable us to resolve the cores of clusters while keeping the simulation volume sufficiently large to ensure a statistically fair sample of the objects of interest. The details of the past year's work are described below.

  20. Large-Scale Noniridescent Structural Color Printing Enabled by Infiltration-Driven Nonequilibrium Colloidal Assembly.

    PubMed

    Bai, Ling; Mai, Van Cuong; Lim, Yun; Hou, Shuai; Möhwald, Helmuth; Duan, Hongwei

    2018-03-01

    Structural colors originating from interaction of light with intricately arranged micro-/nanostructures have stimulated considerable interest because of their inherent photostability and energy efficiency. In particular, noniridescent structural color with wide viewing angle has been receiving increasing attention recently. However, no method is yet available for rapid and large-scale fabrication of full-spectrum structural color patterns with wide viewing angles. Here, infiltration-driven nonequilibrium assembly of colloidal particles on liquid-permeable and particle-excluding substrates is demonstrated to direct the particles to form amorphous colloidal arrays (ACAs) within milliseconds. The infiltration-assisted (IFAST) colloidal assembly opens new possibilities for rapid manufacture of noniridescent structural colors of ACAs and straightforward structural color mixing. Full-spectrum noniridescent structural colors are successfully produced by mixing primary structural colors of red, blue, and yellow using a commercial office inkjet printer. Rapid fabrication of large-scale structural color patterns with sophisticated color combination/layout by IFAST printing is realized. The IFAST technology is versatile for developing structural color patterns with wide viewing angles, as colloidal particles, inks, and substrates are flexibly designable for diverse applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Mining the Mind Research Network: A Novel Framework for Exploring Large Scale, Heterogeneous Translational Neuroscience Research Data Sources

    PubMed Central

    Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.

    2009-01-01

    A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such an NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects, organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based Mind Research Network (MRN) database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large-scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147

  2. Orthographic and Phonological Neighborhood Databases across Multiple Languages.

    PubMed

    Marian, Viorica

    2017-01-01

    The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.
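
    As a concrete example of a neighborhood measure, Coltheart's N counts same-length words differing by exactly one letter; CLEARPOND provides this kind of count (and richer cross-linguistic variants) precomputed. The mini-lexicon here is invented.

```python
def is_neighbor(w1, w2):
    # Same length, exactly one substituted letter (Coltheart's N criterion)
    return (len(w1) == len(w2)
            and sum(a != b for a, b in zip(w1, w2)) == 1)

lexicon = ["cat", "cot", "cap", "bat", "can", "dog", "cog", "cut"]

def neighborhood(word, lexicon):
    return [w for w in lexicon if is_neighbor(word, w)]

print(neighborhood("cat", lexicon))   # ['cot', 'cap', 'bat', 'can', 'cut']
```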

  3. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, where it is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployments, has not been evaluated until now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for future network deployment.

  4. Wireless in-situ Sensor Network for Agriculture and Water Monitoring on a River Basin Scale in Southern Finland: Evaluation from a Data User’s Perspective

    PubMed Central

    Kotamäki, Niina; Thessler, Sirpa; Koskiaho, Jari; Hannukkala, Asko O.; Huitu, Hanna; Huttula, Timo; Havento, Jukka; Järvenpää, Markku

    2009-01-01

    Sensor networks are increasingly being implemented for environmental monitoring and agriculture to provide spatially accurate and continuous environmental information and (near) real-time applications. These networks provide a large amount of data, which poses challenges for ensuring data quality and extracting relevant information. In the present paper we describe a river-basin-scale wireless sensor network for agriculture and water monitoring. The network, called SoilWeather, is unique and the first of this type in Finland. The performance of the network is assessed from the user and maintainer perspectives, concentrating on data quality, network maintenance and applications. The results showed that the SoilWeather network has been functioning in a relatively reliable way, but also that maintenance and data quality assurance by automatic algorithms and calibration samples require a lot of effort, especially in continuous water monitoring over large areas. We see great benefit in sensor networks enabling continuous, real-time monitoring, while data quality control and maintenance efforts highlight the need for tight collaboration between sensor and sensor network owners to decrease costs and increase the quality of the sensor data in large-scale applications. PMID:22574050

  5. Joint classification and contour extraction of large 3D point clouds

    NASA Astrophysics Data System (ADS)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several millions of points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
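
    The multi-scale neighborhood idea can be illustrated with per-point covariance eigenvalues gathered at several radii, a common basis for point-cloud features; this is a generic sketch, not the authors' exact feature set or their optimized implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii=(0.5, 1.0, 2.0)):
    """Per-point normalized covariance eigenvalues at multiple scales."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), len(radii) * 3))
    for j, r in enumerate(radii):
        for i, nbrs in enumerate(tree.query_ball_point(points, r)):
            if len(nbrs) < 3:
                continue                      # too sparse at this scale
            ev = np.linalg.eigvalsh(np.cov(points[nbrs].T))  # ascending
            feats[i, 3 * j:3 * j + 3] = ev[::-1] / ev.sum()  # normalized
    return feats

pts = np.random.default_rng(0).uniform(0, 10, size=(2_000, 3))
X = eigen_features(pts)
print(X.shape)        # (2000, 9): three eigenvalue ratios per scale
```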

  6. High areal capacity hybrid magnesium-lithium-ion battery with 99.9% Coulombic efficiency for large-scale energy storage.

    PubMed

    Yoo, Hyun Deog; Liang, Yanliang; Li, Yifei; Yao, Yan

    2015-04-01

    Hybrid magnesium-lithium-ion batteries (MLIBs) featuring dendrite-free deposition of Mg anode and Li-intercalation cathode are safe alternatives to Li-ion batteries for large-scale energy storage. Here we report for the first time the excellent stability of a high areal capacity MLIB cell and dendrite-free deposition behavior of Mg under high current density (2 mA cm^-2). The hybrid cell showed no capacity loss for 100 cycles with Coulombic efficiency as high as 99.9%, whereas the control cell with a Li-metal anode only retained 30% of its original capacity with Coulombic efficiency well below 90%. The use of TiS2 as a cathode enabled the highest specific capacity and one of the best rate performances among reported MLIBs. Postmortem analysis of the cycled cells revealed dendrite-free Mg deposition on a Mg anode surface, while mossy Li dendrites were observed covering the Li surface and penetrated into separators in the Li cell. The energy density of a MLIB could be further improved by developing electrolytes with higher salt concentration and wider electrochemical window, leading to new opportunities for its application in large-scale energy storage.

  7. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive, tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now achieves extremely high performance on single processors, from algorithmic optimizations and hand-coded routines, and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems, but it also delivers that simulation performance on quite modest numbers of standard cluster nodes.

  8. Predicting the size and elevation of future mountain forests: Scaling macroclimate to microclimate

    NASA Astrophysics Data System (ADS)

    Cory, S. T.; Smith, W. K.

    2017-12-01

    Global climate change is predicted to alter continental-scale macroclimate and regional mesoclimate. Yet, it is at the microclimate scale that organisms interact with their physicochemical environments. Thus, to predict future changes in the biota such as biodiversity and distribution patterns, a quantitative coupling between macro-, meso-, and microclimatic parameters must be developed. We are evaluating the impact of climate change on the size and elevational distribution of conifer mountain forests by determining the microclimate necessary for new seedling survival at the elevational boundaries of the forest. This initial life stage, only a few centimeters away from the soil surface, appears to be the bottleneck to treeline migration and the expansion or contraction of a conifer mountain forest. For example, survival at the alpine treeline is extremely rare and appears to be limited to facilitated microsites with low sky exposure. Yet, abundant mesoclimate data from standard weather stations have rarely been scaled to the microclimate level. Our research focuses on an empirical downscaling approach linking microclimate measurements at favorable seedling microsites to the meso- and macro-climate levels. Specifically, mesoclimate values of air temperature, relative humidity, incident sunlight, and wind speed from NOAA NCEI weather stations can be extrapolated to the microsite level that is physiologically relevant for seedling survival. Data will be presented showing a strong correlation between incident sunlight measured at 2 m and seedling microclimate, despite large differences from seedling/microsite temperatures. Our downscaling approach will ultimately enable predictions of microclimate from the much more abundant mesoclimate data available from a variety of sources. Thus, scaling from macro- to meso- to microclimate will be possible, enabling predictions of climate change models to be translated to the microsite level. This linkage between measurement scales will enable a more precise prediction of the effects of climate change on the future extent and elevational distribution of our mountain forests and an accompanying array of critical ecosystem services.
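
    A minimal version of such an empirical downscaling is a regression from 2-m station predictors to a microsite response; all numbers below are synthetic, and a real calibration would use paired field-sensor records.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
air_t = rng.normal(10, 5, n)        # 2-m air temperature (C)
sun = rng.uniform(0, 1000, n)       # incident shortwave (W m^-2)
wind = rng.uniform(0, 8, n)         # wind speed (m s^-1)

# Pretend "truth": surface microsites run warmer in sun, cooler in wind
micro_t = 0.9 * air_t + 0.012 * sun - 0.6 * wind + rng.normal(0, 0.8, n)

# Ordinary least squares: microclimate ~ station predictors
X = np.column_stack([np.ones(n), air_t, sun, wind])
beta, *_ = np.linalg.lstsq(X, micro_t, rcond=None)
print("intercept, per-degree, per-(W m^-2), per-(m s^-1):", np.round(beta, 3))
```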

  9. Observing the Cosmic Microwave Background Polarization with Variable-delay Polarization Modulators for the Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; CLASS Collaboration

    2018-01-01

    The search for inflationary primordial gravitational waves and the optical depth to reionization, both through their imprint on the large angular scale correlations in the polarization of the cosmic microwave background (CMB), has created the need for high sensitivity measurements of polarization across large fractions of the sky at millimeter wavelengths. These measurements are subjected to instrumental and atmospheric 1/f noise, which has motivated the development of polarization modulators to facilitate the rejection of these large systematic effects.Variable-delay polarization modulators (VPMs) are used in the Cosmology Large Angular Scale Surveyor (CLASS) telescopes as the first element in the optical chain to rapidly modulate the incoming polarization. VPMs consist of a linearly polarizing wire grid in front of a moveable flat mirror; varying the distance between the grid and the mirror produces a changing phase shift between polarization states parallel and perpendicular to the grid which modulates Stokes U (linear polarization at 45°) and Stokes V (circular polarization). The reflective and scalable nature of the VPM enables its placement as the first optical element in a reflecting telescope. This simultaneously allows a lock-in style polarization measurement and the separation of sky polarization from any instrumental polarization farther along in the optical chain.The Q-Band CLASS VPM was the first VPM to begin observing the CMB full time in 2016. I will be presenting its design and characterization as well as demonstrating how modulating polarization significantly rejects atmospheric and instrumental long time scale noise.
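
    An idealized picture of the VPM transfer is a rotation in the Stokes U-V plane by a phase that depends on the grid-mirror separation. The normal-incidence, monochromatic form below is a textbook idealization, not the CLASS instrument model.

```python
import numpy as np

wavelength = 0.0075                     # ~40 GHz Q band, meters
d = np.linspace(0, wavelength, 200)     # grid-mirror separation sweep
delta = 4 * np.pi * d / wavelength      # ideal phase delay, normal incidence

U_in, V_in = 1.0, 0.0                   # purely linear (45 deg) input
U_out = U_in * np.cos(delta) - V_in * np.sin(delta)
V_out = U_in * np.sin(delta) + V_in * np.cos(delta)

# Sweeping d modulates the detected signal; demodulating against delta(t)
# recovers U while slow, unmodulated atmospheric 1/f drifts average away.
print(f"U_out range: [{U_out.min():.2f}, {U_out.max():.2f}]")
```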

  10. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
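
    The replicated-reconstruction-object idea, each worker accumulating into a private copy that is reduced at the end, can be mimicked in a few lines; this is a toy analogue of Trace's optimization, not its implementation, and the "backprojection" is a stand-in kernel.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N, WORKERS = 256, 4
sinogram_rows = np.random.default_rng(0).random((1024, N))

def backproject_rows(rows):
    replica = np.zeros((N, N))            # private replica: no lock contention
    for row in rows:
        replica += row[np.newaxis, :]     # stand-in for a real BP kernel
    return replica

chunks = np.array_split(sinogram_rows, WORKERS)
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    replicas = list(pool.map(backproject_rows, chunks))

image = np.sum(replicas, axis=0)          # reduction over the replicas
print(image.shape)
```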

  11. From Pleistocene to Holocene: the prehistory of southwest Asia in evolutionary context.

    PubMed

    Watkins, Trevor

    2017-08-14

    In this paper I seek to show how cultural niche construction theory offers the potential to extend the human evolutionary story beyond the Pleistocene, through the Neolithic, towards the kind of very large-scale societies in which we live today. The study of the human past has been compartmentalised, each compartment using different analytical vocabularies, so that their accounts are written in mutually incompatible languages. In recent years social, cognitive and cultural evolutionary theories, building on a growing body of archaeological evidence, have made substantial sense of the social and cultural evolution of the genus Homo. However, specialists in this field of studies have found it difficult to extend their kind of analysis into the Holocene human world. Within southwest Asia the three or four millennia of the Neolithic period at the beginning of the Holocene represents a pivotal point, which saw the transformation of human society in the emergence of the first large-scale, permanent communities, the domestication of plants and animals, and the establishment of effective farming economies. Following the Neolithic, the pace of human social, economic and cultural evolution continued to increase. By 5000 years ago, in parts of southwest Asia and northeast Africa there were very large-scale urban societies, and the first large-scale states (kingdoms). An extension of cultural niche construction theory enables us to extend the evolutionary narrative of the Pleistocene into the Holocene, opening the way to developing a single, long-term, evolutionary account of human history.

  12. The influence of cognitive load on spatial search performance.

    PubMed

    Longstaffe, Kate A; Hood, Bruce M; Gilchrist, Iain D

    2014-01-01

    During search, executive function enables individuals to direct attention to potential targets, remember locations visited, and inhibit distracting information. In the present study, we investigated these executive processes in large-scale search. In our tasks, participants searched a room containing an array of illuminated locations embedded in the floor. The participants' task was to press the switches at the illuminated locations on the floor so as to locate a target that changed color when pressed. The perceptual salience of the search locations was manipulated by having some locations flashing and some static. Participants were more likely to search at flashing locations, even when they were explicitly informed that the target was equally likely to be at any location. In large-scale search, attention was captured by the perceptual salience of the flashing lights, leading to a bias to explore these targets. Despite this failure of inhibition, participants were able to restrict returns to previously visited locations, a measure of spatial memory performance. Participants were more able to inhibit exploration to flashing locations when they were not required to remember which locations had previously been visited. A concurrent digit-span memory task further disrupted inhibition during search, as did a concurrent auditory attention task. These experiments extend a load theory of attention to large-scale search, which relies on egocentric representations of space. High cognitive load on working memory leads to increased distractor interference, providing evidence for distinct roles for the executive subprocesses of memory and inhibition during large-scale search.

  13. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    NASA Astrophysics Data System (ADS)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  14. Lipid Vesicle Shape Analysis from Populations Using Light Video Microscopy and Computer Vision

    PubMed Central

    Zupanc, Jernej; Drašler, Barbara; Boljte, Sabina; Kralj-Iglič, Veronika; Iglič, Aleš; Erdogmus, Deniz; Drobne, Damjana

    2014-01-01

    We present a method for giant lipid vesicle shape analysis that combines manually guided large-scale video microscopy and computer vision algorithms to enable analyzing vesicle populations. The method retains the benefits of light microscopy and enables non-destructive analysis of vesicles from suspensions containing up to several thousands of lipid vesicles (1–50 µm in diameter). For each sample, image analysis was employed to extract data on vesicle quantity and size distributions of their projected diameters and isoperimetric quotients (measure of contour roundness). This process enables a comparison of samples from the same population over time, or the comparison of a treated population to a control. Although vesicles in suspensions are heterogeneous in sizes and shapes and have distinctively non-homogeneous distribution throughout the suspension, this method allows for the capture and analysis of repeatable vesicle samples that are representative of the population inspected. PMID:25426933
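
    The isoperimetric quotient used here is commonly defined as Q = 4*pi*A / P^2, which equals 1 for a circle and falls below 1 for deformed contours; computing it for a polygonal vesicle outline takes only the shoelace formula.

```python
import numpy as np

def isoperimetric_quotient(contour):
    """contour: (n, 2) array of ordered x, y vertices of a closed outline."""
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.hypot(*(contour - np.roll(contour, -1, axis=0)).T))
    return 4 * np.pi * area / perim**2

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([2 * np.cos(theta), np.sin(theta)])
print(f"circle: {isoperimetric_quotient(circle):.4f}, "
      f"2:1 ellipse: {isoperimetric_quotient(ellipse):.4f}")
```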

  15. Preserving Smart Objects Privacy through Anonymous and Accountable Access Control for a M2M-Enabled Internet of Things

    PubMed Central

    Hernández-Ramos, José L.; Bernabe, Jorge Bernal; Moreno, M. Victoria; Skarmeta, Antonio F.

    2015-01-01

    As we get into the Internet of Things era, security and privacy concerns remain the main obstacles in the development of innovative and valuable services to be exploited by society. Given the Machine-to-Machine (M2M) nature of these emerging scenarios, the application of current privacy-friendly technologies needs to be reconsidered and adapted to be deployed in such a global ecosystem. This work proposes different privacy-preserving mechanisms through the application of anonymous credential systems and certificateless public key cryptography. The resulting alternatives are intended to enable an anonymous and accountable access control approach to be deployed in large-scale scenarios, such as Smart Cities. Furthermore, the proposed mechanisms have been deployed on constrained devices, in order to assess their suitability for a secure and privacy-preserving M2M-enabled Internet of Things. PMID:26140349

  16. Preserving Smart Objects Privacy through Anonymous and Accountable Access Control for a M2M-Enabled Internet of Things.

    PubMed

    Hernández-Ramos, José L; Bernabe, Jorge Bernal; Moreno, M Victoria; Skarmeta, Antonio F

    2015-07-01

    As we get into the Internet of Things era, security and privacy concerns remain the main obstacles in the development of innovative and valuable services to be exploited by society. Given the Machine-to-Machine (M2M) nature of these emerging scenarios, the application of current privacy-friendly technologies needs to be reconsidered and adapted to be deployed in such a global ecosystem. This work proposes different privacy-preserving mechanisms through the application of anonymous credential systems and certificateless public key cryptography. The resulting alternatives are intended to enable an anonymous and accountable access control approach to be deployed in large-scale scenarios, such as Smart Cities. Furthermore, the proposed mechanisms have been deployed on constrained devices, in order to assess their suitability for a secure and privacy-preserving M2M-enabled Internet of Things.

  17. 2000 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Greg; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2001-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia, and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 1999 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2000 by the High Performance Computing and Communications Program.

  18. 2001 Numerical Propulsion System Simulation Review

    NASA Technical Reports Server (NTRS)

    Lytle, John; Follen, Gregory; Naiman, Cynthia; Veres, Joseph; Owen, Karl; Lopez, Isaac

    2002-01-01

    The technologies necessary to enable detailed numerical simulations of complete propulsion systems are being developed at the NASA Glenn Research Center in cooperation with industry, academia and other government agencies. Large scale, detailed simulations will be of great value to the nation because they eliminate some of the costly testing required to develop and certify advanced propulsion systems. In addition, time and cost savings will be achieved by enabling design details to be evaluated early in the development process before a commitment is made to a specific design. This concept is called the Numerical Propulsion System Simulation (NPSS). NPSS consists of three main elements: (1) engineering models that enable multidisciplinary analysis of large subsystems and systems at various levels of detail, (2) a simulation environment that maximizes designer productivity, and (3) a cost-effective, high-performance computing platform. A fundamental requirement of the concept is that the simulations must be capable of overnight execution on easily accessible computing platforms. This will greatly facilitate the use of large-scale simulations in a design environment. This paper describes the current status of the NPSS with specific emphasis on the progress made over the past year on air breathing propulsion applications. Major accomplishments include the first formal release of the NPSS object-oriented architecture (NPSS Version 1) and the demonstration of a one order of magnitude reduction in computing cost-to-performance ratio using a cluster of personal computers. The paper also describes the future NPSS milestones, which include the simulation of space transportation propulsion systems in response to increased emphasis on safe, low cost access to space within NASA's Aerospace Technology Enterprise. In addition, the paper contains a summary of the feedback received from industry partners on the fiscal year 2000 effort and the actions taken over the past year to respond to that feedback. NPSS was supported in fiscal year 2001 by the High Performance Computing and Communications Program.

  19. Embarking on large-scale qualitative research: reaping the benefits of mixed methods in studying youth, clubs and drugs

    PubMed Central

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2012-01-01

    Qualitative research is often conceptualized as inherently small-scale research, primarily conducted by a lone researcher enmeshed in extensive and long-term fieldwork or involving in-depth interviews with a small sample of 20 to 30 participants. In the study of illicit drugs, traditionally this has often been in the form of ethnographies of drug-using subcultures. Such small-scale projects have produced important interpretive scholarship that focuses on the culture and meaning of drug use in situated, embodied contexts. Larger-scale projects are often assumed to be solely the domain of quantitative researchers, using formalistic survey methods and descriptive or explanatory models. In this paper, however, we will discuss qualitative research done on a comparatively larger scale—with in-depth qualitative interviews with hundreds of young drug users. Although this work incorporates some quantitative elements into the design, data collection, and analysis, the qualitative dimension and approach has nevertheless remained central. Larger-scale qualitative research shares some of the challenges and promises of smaller-scale qualitative work including understanding drug consumption from an emic perspective, locating hard-to-reach populations, developing rapport with respondents, generating thick descriptions and a rich analysis, and examining the wider socio-cultural context as a central feature. However, there are additional challenges specific to the scale of qualitative research, which include data management, data overload and problems of handling large-scale data sets, time constraints in coding and analyzing data, and personnel issues including training, organizing and mentoring large research teams. Yet large samples can prove to be essential for enabling researchers to conduct comparative research, whether that be cross-national research within a wider European perspective undertaken by different teams or cross-cultural research looking at internal divisions and differences within diverse communities and cultures. PMID:22308079

  20. Lessons Learned from Managing a Petabyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, J

    2005-01-20

    The amount of data collected and stored by the average business doubles each year. Many commercial databases are already approaching hundreds of terabytes, and at this rate, will soon be managing petabytes. More data enables new functionality and capability, but the larger scale reveals new problems and issues hidden in "smaller" terascale environments. This paper presents some of these new problems along with implemented solutions in the framework of a petabyte dataset for a large High Energy Physics experiment. Through experience with two persistence technologies, a commercial database and a file-based approach, we expose format-independent concepts and issues prevalent at this new scale of computing.

  1. Thermal System Verification and Model Validation for NASA's Cryogenic Passively Cooled James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cleveland, Paul E.; Parrish, Keith A.

    2005-01-01

    A thorough and unique thermal verification and model validation plan has been developed for NASA's James Webb Space Telescope. The JWST observatory consists of a large deployed aperture optical telescope passively cooled to below 50 Kelvin along with a suite of several instruments passively and actively cooled to below 37 Kelvin and 7 Kelvin, respectively. Passive cooling to these extremely low temperatures is made feasible by the use of a large deployed high efficiency sunshield and an orbit location at the L2 Lagrange point. Another enabling feature is the scale or size of the observatory that allows for large radiator sizes that are compatible with the expected power dissipation of the instruments and large format Mercury Cadmium Telluride (HgCdTe) detector arrays. This passive cooling concept is simple, reliable, and mission enabling when compared to the alternatives of mechanical coolers and stored cryogens. However, these same large-scale observatory features, which make passive cooling viable, also prevent the typical flight-configuration, fully-deployed thermal balance test that is the keystone of most space missions' thermal verification plans. JWST is simply too large in its deployed configuration to be properly thermal balance tested in the facilities that currently exist. This reality, when combined with a mission thermal concept with little to no flight heritage, has created the need for a unique and alternative approach to thermal system verification and model validation. This paper describes the thermal verification and model validation plan that has been developed for JWST. The plan relies on judicious use of cryogenic and thermal design margin, a completely independent thermal modeling cross check utilizing different analysis teams and software packages, and finally, a comprehensive set of thermal tests that occur at different levels of JWST assembly. After a brief description of the JWST mission and thermal architecture, a detailed description of the three aspects of the thermal verification and model validation plan is presented.

  2. Roll-to-Roll Nanoforming of Metals Using Laser-Induced Superplasticity.

    PubMed

    Goswami, Debkalpa; Munera, Juan C; Pal, Aniket; Sadri, Behnam; Scarpetti, Caio Lui P G; Martinez, Ramses V

    2018-05-24

    This Letter describes a low-cost, scalable nanomanufacturing process that enables the continuous forming of thin metallic layers with nanoscale accuracy using roll-to-roll, laser-induced superplasticity (R2RLIS). R2RLIS uses a laser shock to induce the ultrahigh-strain-rate deformation of metallic films at room temperature into low-cost polymeric nanomolds, independently of the original grain size of the metal. This simple and inexpensive nanoforming method does not require access to cleanrooms and associated facilities, and can be easily implemented on conventional CO2 lasers, enabling laser systems commonly used for rapid prototyping or industrial cutting and engraving to fabricate uniform and three-dimensional crystalline metallic nanostructures over large areas. Tuning the laser power during the R2RLIS process enables the control of the aspect ratio and the mechanical and optical properties of the fabricated nanostructures. This roll-to-roll technique successfully fabricates mechanically strengthened gold plasmonic nanostructures with aspect ratios as high as 5 that exhibit high oxidation resistance and strong optical field enhancements. The CO2 laser used in R2RLIS can also integrate the fabricated nanostructures on transparent flexible substrates with robust interfacial contact. The ability to fabricate ultrasmooth metallic nanostructures using roll-to-roll manufacturing enables the large scale production, at a relatively low-cost, of flexible plasmonic devices toward emerging applications.

  3. The Child Play Behavior and Activity Questionnaire: A Parent-Report Measure of Childhood Gender-Related Behavior in China

    PubMed Central

    Winter, Sam; Xie, Dong

    2008-01-01

    Boys and girls establish relatively stable gender stereotyped behavior patterns by middle childhood. Parent-report questionnaires measuring children’s gender-related behavior enable researchers to conduct large-scale screenings of community samples of children. For school-aged children, two parent-report instruments, the Child Game Participation Questionnaire (CGPQ) and the Child Behavior and Attitude Questionnaire (CBAQ), have long been used for measuring children’s sex-dimorphic behaviors in Western societies, but few studies have been conducted using these measures for Chinese populations. The current study aimed to empirically examine and modify the two instruments for their applications to Chinese society. Parents of 486 Chinese boys and 417 Chinese girls (6–12 years old) completed a questionnaire comprising items from the CGPQ and CBAQ, and an additional 14 items specifically related to Chinese gender-specific games. Items revealing gender differences in a Chinese sample were identified and used to construct a Child Play Behavior and Activity Questionnaire (CPBAQ). Four new scales were generated through factor analysis: a Gender Scale, a Girl Typicality Scale, a Boy Typicality Scale, and a Cross-Gender Scale (CGS). These scales had satisfactory internal reliabilities and large effect sizes for gender. The CPBAQ is believed to be a promising instrument for measuring children’s gender-related behavior in China. PMID:18719986

  4. Infrared Extinction and Stellar Populations in the Milky Way Midplane

    NASA Astrophysics Data System (ADS)

    Zasowski, Gail; Majewski, S. R.; Benjamin, R. A.; Nidever, D. L.; Skrutskie, M. F.; Indebetouw, R.; Patterson, R. J.; Meade, M. R.; Whitney, B. A.; Babler, B.; Churchwell, E.; Watson, C.

    2012-01-01

    The primary laboratory for developing and testing models of galaxy formation, structure, and evolution is our own Milky Way, the closest large galaxy and the only one in which we can resolve large numbers of individual stars. The recent availability of extensive stellar surveys, particularly infrared ones, has enabled precise, contiguous measurement of large-scale Galactic properties, a major improvement over inferences based on selected, but scattered, sightlines. However, our ability to fully exploit the Milky Way as a galactic laboratory is severely hampered by the fact that its midplane and central bulge -- where most of the Galactic stellar mass lies -- are heavily obscured by interstellar dust. Therefore, proper consideration of the interstellar extinction is crucial. This thesis describes a new extinction-correction method (the RJCE method) that measures the foreground extinction towards each star and, in many cases, enables recovery of its intrinsic stellar type. We have demonstrated the RJCE method's validity and used it to produce new, reliable extinction maps of the heavily-reddened Galactic midplane. Taking advantage of the recovered stellar type information, we have generated maps probing the extinction at different heliocentric distances, thus yielding information on the elusive three-dimensional distribution of the interstellar dust. We also performed a study of the interstellar extinction law itself which revealed variations previously undetected in the diffuse ISM and established constraints on models of ISM grain formation and evolution. Furthermore, we undertook a study of large-scale stellar structure in the inner Galaxy -- the bar(s), bulge(s), and inner spiral arms. We used observed and extinction-corrected infrared photometry to map the coherent stellar features in these heavily-obscured parts of the Galaxy, placing constraints on models of the central stellar mass distribution.
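
    The RJCE estimate is, in its published form, a linear scaling of a star's H − [4.5 μm] color excess. A minimal sketch follows; the slope and intrinsic color are quoted from the RJCE calibration of Majewski, Zasowski & Nidever (2011) as recalled here, and the example magnitudes and the A_Ks/A_V ratio are illustrative assumptions, so verify all coefficients against the paper before any scientific use.

```python
# Sketch of an RJCE-style per-star extinction estimate: the observed
# H - [4.5 micron] color, minus a nearly constant intrinsic color, scales
# to the K_s-band extinction. Coefficients follow the published RJCE
# calibration as best recalled here -- check the paper before use.

def rjce_aks(h_mag: float, m45_mag: float,
             intrinsic_color: float = 0.08, slope: float = 0.918) -> float:
    """Estimate A_Ks from 2MASS H and IRAC/WISE 4.5-micron photometry."""
    return slope * (h_mag - m45_mag - intrinsic_color)

# Example: a red-giant-like star observed through heavy midplane dust.
h, m45 = 12.80, 11.95
a_ks = rjce_aks(h, m45)
print(f"A_Ks ~ {a_ks:.2f} mag; "
      f"A_V ~ {a_ks / 0.112:.1f} mag (assuming A_Ks/A_V ~ 0.112)")
```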

  5. Small-scale volcanoes on Mars: distribution and types

    NASA Astrophysics Data System (ADS)

    Broz, Petr; Hauber, Ernst

    2015-04-01

    Volcanoes differ in sizes, as does the amount of magma which ascends to a planetary surface. On Earth, the size of volcanoes is anti-correlated with their frequency, i.e. small volcanoes are much more numerous than large ones. The most common terrestrial volcanoes are scoria cones (

  6. NDEx - the Network Data Exchange, A Network Commons for Biologists | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Network models of biology, whether curated or derived from large-scale data analysis, are critical tools in the understanding of cancer mechanisms and in the design and personalization of therapies. The NDEx Project (Network Data Exchange) will create, deploy, and maintain an open-source, web-based software platform and public website to enable scientists, organizations, and software applications to share, store, manipulate, and publish biological networks.

  7. Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments

    DTIC Science & Technology

    2017-06-13

    reference frames enable a system designer to describe the position of any sensor or platform at any point of time. This section introduces the...analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In...structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values ( f ). A systems designer must

  8. III-V Semiconductor Optical Micro-Ring Resonators

    NASA Astrophysics Data System (ADS)

    Grover, Rohit; Absil, Philippe P.; Ibrahim, Tarek A.; Ho, Ping-Tong

    2004-05-01

    We describe the theory of optical ring resonators, and our work on GaAs-AlGaAs and GaInAsP-InP optical micro-ring resonators. These devices are promising building blocks for future all-optical signal processing and photonic logic circuits. Their versatility allows the fabrication of ultra-compact multiplexers/demultiplexers, optical channel dropping filters, lasers, amplifiers, and logic gates (to name a few), which will enable large-scale monolithic integration for optics.
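
    A minimal sketch of the standard all-pass ring transfer function that underlies the theory referenced above: through-port transmission versus round-trip phase for a ring with single-pass amplitude transmission a and self-coupling coefficient t. This is the textbook form, not the paper's specific GaAs-AlGaAs or GaInAsP-InP device model, and the parameter values are arbitrary.

```python
# Through-port transmission |E_out/E_in|^2 of an all-pass micro-ring
# resonator (textbook single-bus model, arbitrary parameters).
import numpy as np

def through_port_T(phi: np.ndarray, t: float = 0.95, a: float = 0.97) -> np.ndarray:
    """Transmission vs. round-trip phase phi; t: self-coupling,
    a: single-pass amplitude transmission of the ring."""
    num = t - a * np.exp(1j * phi)
    den = 1 - t * a * np.exp(1j * phi)
    return np.abs(num / den) ** 2

phi = np.linspace(-np.pi, np.pi, 2001)
T = through_port_T(phi)
print("on-resonance dip:", T.min(), " off-resonance:", T.max())
```

    Near critical coupling (t close to a) the on-resonance dip approaches zero, which is the behavior exploited in channel-dropping filters and logic gates built from such rings.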

  9. Sunlight-Driven Forging of Amide/Ester Bonds from Three Independent Components: An Approach to Carbamates.

    PubMed

    Zhao, Yating; Huang, Binbin; Yang, Chao; Chen, Qingqing; Xia, Wujiong

    2016-11-04

    A photoredox catalytic route to carbamates enabled by visible-light irradiation (or simply sunlight) has been developed. This process provides a novel approach to the construction of heterocyclic rings wherein the amide or ester motifs of the carbamates are assembled from three separate components. Large-scale experiments were realized by employing continuous-flow techniques, and reuse of the photocatalyst demonstrated the green and sustainable aspects of this method.

  10. Mission Command in the Age of Network-Enabled Operations: Social Network Analysis of Information Sharing and Situation Awareness

    DTIC Science & Technology

    2016-06-22

    this assumption in a large-scale, 2-week military training exercise. We conducted a social network analysis of email communications among the multi...exponential random graph models challenge the aforementioned assumption, as increased email output was associated with lower individual situation... email links were more commonly formed among members of the command staff with both similar functions and levels of situation awareness, than between

  11. Design of a large-scale femtoliter droplet array for single-cell analysis of drug-tolerant and drug-resistant bacteria.

    PubMed

    Iino, Ryota; Matsumoto, Yoshimi; Nishino, Kunihiko; Yamaguchi, Akihito; Noji, Hiroyuki

    2013-01-01

    Single-cell analysis is a powerful method to assess the heterogeneity among individual cells, enabling the identification of very rare cells with properties that differ from those of the majority. In this Methods Article, we describe the use of a large-scale femtoliter droplet array to enclose, isolate, and analyze individual bacterial cells. As a first example, we describe the single-cell detection of drug-tolerant persisters of Pseudomonas aeruginosa treated with the antibiotic carbenicillin. As a second example, the method was applied to the single-cell evaluation of drug efflux activity, which underlies acquired antibiotic resistance in bacteria. The MexAB-OprM multidrug efflux pump system from Pseudomonas aeruginosa was expressed in Escherichia coli, and the effect of the inhibitor D13-9001 was assessed at the single-cell level.

  12. High–energy density nonaqueous all redox flow lithium battery enabled with a polymeric membrane

    PubMed Central

    Jia, Chuankun; Pan, Feng; Zhu, Yun Guang; Huang, Qizhao; Lu, Li; Wang, Qing

    2015-01-01

    Redox flow batteries (RFBs) are considered one of the most promising large-scale energy storage technologies. However, conventional RFBs suffer from low energy density due to the low solubility of the active materials in electrolyte. On the basis of the redox targeting reactions of battery materials, the redox flow lithium battery (RFLB) demonstrated in this report presents a disruptive approach to drastically enhancing the energy density of flow batteries. With LiFePO4 and TiO2 as the cathodic and anodic Li storage materials, respectively, the tank energy density of RFLB could reach ~500 watt-hours per liter (50% porosity), which is 10 times higher than that of a vanadium redox flow battery. The cell exhibits good electrochemical performance under a prolonged cycling test. Our prototype RFLB full cell paves the way toward the development of a new generation of flow batteries for large-scale energy storage. PMID:26702440

  13. Techniques for control of long-term reliability of complex integrated circuits. I - Reliability assurance by test vehicle qualification.

    NASA Technical Reports Server (NTRS)

    Van Vonno, N. W.

    1972-01-01

    Development of an alternate approach to the conventional methods of reliability assurance for large-scale integrated circuits. The product treated is a large-scale T²L (TTL) array designed for space applications. The concept used is that of qualification of product by evaluation of the basic processing used in fabricating the product, providing an insight into its potential reliability. Test vehicles are described which enable evaluation of device characteristics, surface condition, and various parameters of the two-level metallization system used. Evaluation of these test vehicles is performed on a lot qualification basis, with the lot consisting of one wafer. Assembled test vehicles are evaluated by high-temperature stress at 300°C for short durations. Stressing at these temperatures provides a rapid method of evaluation and permits a go/no-go decision to be made on the wafer lot in a timely fashion.

  14. Crowdsourced 'R&D' and medical research.

    PubMed

    Callaghan, Christian William

    2015-09-01

    Crowdsourced R&D, a research methodology increasingly applied to medical research, has properties well suited to large-scale medical data collection and analysis, as well as enabling rapid research responses to crises such as disease outbreaks. Multidisciplinary literature offers diverse perspectives of crowdsourced R&D as a useful large-scale medical data collection and research problem-solving methodology. Crowdsourced R&D has demonstrated 'proof of concept' in a host of different biomedical research applications. A wide range of quality and ethical issues relate to crowdsourced R&D. The rapid growth in applications of crowdsourced R&D in medical research is predicted by an increasing body of multidisciplinary theory. Further research in areas such as artificial intelligence may allow better coordination and management of the high volumes of medical data and problem-solving inputs generated by the crowdsourced R&D process. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Network cameras have grown rapidly in number in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
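
    The usage pattern described above, write a single-frame analysis function and hand it to the platform together with a camera selection, might look like the sketch below. The function and parameter names are invented placeholders standing in for the system's actual API, and the frame source is faked so the sketch runs standalone.

```python
# Hypothetical sketch of the "bring your own frame analyzer" pattern. The
# run_on_cameras() scheduler is a stand-in: the real system would allocate
# EC2/Azure workers and hide camera heterogeneity behind this interface.
import numpy as np

def analyze_frame(frame: np.ndarray) -> float:
    """User's single-frame program: here, fraction of bright pixels."""
    return float((frame > 200).mean())

def run_on_cameras(camera_ids, frame_source, analyze):
    """Fetch one frame per selected camera and apply the user's analysis."""
    return {cam: analyze(frame_source(cam)) for cam in camera_ids}

# Fake frame source so the sketch is self-contained.
rng = np.random.default_rng(1)
def fake_source(cam):
    return rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

results = run_on_cameras(["cam_0001", "cam_0002"], fake_source, analyze_frame)
print(results)
```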

  16. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits can be an extremely time-consuming process even when sparse matrix techniques and bypassing of nonlinear model calculations are used. The time required for this task can be modestly reduced on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of the blocks, taking the bordered-block-diagonal (BBD) matrix structure as a departure point. This block-parallel approach may yield considerable gains, though the benefit depends strongly on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
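
    The sketch below illustrates the concurrency idea in a drastically simplified form: Newton-Raphson iterations in which the expensive nonlinear model evaluations run concurrently across decoupled blocks. A real BBD-based solver couples the blocks through border variables and manages sparse-matrix stamps; here the blocks are fully independent, the device parameters are arbitrary, and Python threads merely illustrate the structure (a production solver would use native threads or processes).

```python
# Toy block-parallel Newton-Raphson DC solve: each "block" is a diode node
# with a conductance and current source; model evaluations run concurrently.
from concurrent.futures import ThreadPoolExecutor
import math

IS, VT, G, J = 1e-12, 0.02585, 1e-3, 1e-3   # diode + conductance + source

def residual_and_slope(v: float):
    """Nonlinear model evaluation ("stamp") for one block."""
    e = math.exp(v / VT)
    f = IS * (e - 1.0) + G * v - J
    df = IS * e / VT + G
    return f, df

def newton_block(v0: float, tol: float = 1e-12) -> float:
    v = v0
    for _ in range(100):
        f, df = residual_and_slope(v)
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v

with ThreadPoolExecutor() as pool:
    solutions = list(pool.map(newton_block, [0.1] * 8))  # 8 decoupled blocks
print(f"block operating point: {solutions[0]:.4f} V")
```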

  17. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
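
    For concreteness, here is a minimal single-grid D2Q9 lattice Boltzmann (BGK) stream-and-collide loop, the core update on which methods like the one above build. It deliberately omits the paper's actual contributions, adaptive refinement, large eddy simulation modeling, and moving embedded structures; the grid size, relaxation time, and initial shear profile are arbitrary.

```python
# Minimal D2Q9 BGK lattice Boltzmann sketch: periodic decay of a shear wave.
import numpy as np

nx, ny, tau = 64, 64, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

# Initialize: uniform density, small sinusoidal shear velocity.
y = np.arange(ny)
rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * y / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(200):
    # Streaming: shift each population along its lattice velocity (periodic).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments, then BGK relaxation toward local equilibrium.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau

print("max |u| after viscous decay:", float(np.hypot(ux, uy).max()))
```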

  18. Joint Estimation of the Epoch of Reionization Power Spectrum and Foregrounds

    NASA Astrophysics Data System (ADS)

    Sims, Peter; Pober, Jonathan

    2018-01-01

    Bright astrophysical foregrounds present a significant impediment to the detection of redshifted 21-cm emission from the Epoch of Reionization on large spatial scales. In this talk I present a framework for the joint modeling of the power spectral contamination by astrophysical foregrounds and the power spectrum of the Epoch of Reionization. I show how informative priors on the power spectral contamination by astrophysical foregrounds at high redshifts, where emission from both the Epoch of Reionization and its foregrounds is present in the data, can be obtained through analysis of foreground-only emission at lower redshifts. Finally, I demonstrate how, by using such informative foreground priors, joint modeling can be employed to mitigate bias in estimates of the power spectrum of the Epoch of Reionization signal and, in particular, to enable recovery of more robust power spectral estimates on large spatial scales.

  19. Improved uniformity in high-performance organic photovoltaics enabled by (3-aminopropyl)triethoxysilane cathode functionalization.

    PubMed

    Luck, Kyle A; Shastry, Tejas A; Loser, Stephen; Ogien, Gabriel; Marks, Tobin J; Hersam, Mark C

    2013-12-28

    Organic photovoltaics have the potential to serve as lightweight, low-cost, mechanically flexible solar cells. However, losses in efficiency as laboratory cells are scaled up to the module level have to date impeded large-scale deployment. Here, we report that a 3-aminopropyltriethoxysilane (APTES) cathode interfacial treatment significantly enhances performance reproducibility in inverted high-efficiency PTB7:PC71BM organic photovoltaic cells, as demonstrated by the fabrication of 100 APTES-treated devices versus 100 untreated controls. The APTES-treated devices achieve a power conversion efficiency of 8.08 ± 0.12% with histogram skewness of -0.291, whereas the untreated controls achieve 7.80 ± 0.26% with histogram skewness of -1.86. By substantially suppressing the interfacial origins of underperforming cells, the APTES treatment offers a pathway for fabricating large-area modules with high spatial performance uniformity.
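
    The population statistics quoted above (mean ± standard deviation and histogram skewness) can be reproduced in a few lines. The sketch below does so on simulated efficiency data in which the untreated population carries a longer tail of underperforming devices; the numbers are simulated stand-ins, not the paper's measurements.

```python
# Mean +/- std and skewness for treated vs. control device populations,
# on simulated power-conversion-efficiency data.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
# 100 devices each: controls include a minority tail of underperformers.
treated = rng.normal(8.08, 0.12, 100)
control = np.concatenate([rng.normal(7.9, 0.15, 90), rng.normal(7.0, 0.3, 10)])

for name, pce in [("APTES-treated", treated), ("control", control)]:
    print(f"{name}: {pce.mean():.2f} +/- {pce.std(ddof=1):.2f}%, "
          f"skewness {skew(pce):.2f}")
```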

  20. Multiplexed analysis of protein-ligand interactions by fluorescence anisotropy in a microfluidic platform.

    PubMed

    Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F

    2014-10-07

    Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.
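
    A sketch of how such fluorescence-anisotropy titrations are commonly reduced to binding affinities: anisotropy interpolates between the free and bound values with fractional occupancy [P]/([P] + Kd), valid when the labeled species is well below Kd, and the curve is fit for Kd. The data are simulated and all parameter values are assumptions.

```python
# Fit a simulated anisotropy titration for the dissociation constant Kd.
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(p_conc, kd, r_free, r_bound):
    frac_bound = p_conc / (p_conc + kd)
    return r_free + (r_bound - r_free) * frac_bound

rng = np.random.default_rng(7)
p = np.logspace(-2, 2, 12)                       # protein titration, in uM
r_obs = anisotropy(p, 1.5, 0.05, 0.25) + rng.normal(0, 0.004, p.size)

popt, _ = curve_fit(anisotropy, p, r_obs, p0=(1.0, 0.04, 0.2))
print(f"fit Kd = {popt[0]:.2f} uM")
```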

  1. Endocytic reawakening of motility in jammed epithelia

    NASA Astrophysics Data System (ADS)

    Malinverno, Chiara; Corallino, Salvatore; Giavazzi, Fabio; Bergert, Martin; Li, Qingsen; Leoni, Marco; Disanza, Andrea; Frittoli, Emanuela; Oldani, Amanda; Martini, Emanuele; Lendenmann, Tobias; Deflorian, Gianluca; Beznoussenko, Galina V.; Poulikakos, Dimos; Ong, Kok Haur; Uroz, Marina; Trepat, Xavier; Parazzoli, Dario; Maiuri, Paolo; Yu, Weimiao; Ferrari, Aldo; Cerbino, Roberto; Scita, Giorgio

    2017-05-01

    Dynamics of epithelial monolayers has recently been interpreted in terms of a jamming or rigidity transition. How cells control such phase transitions is, however, unknown. Here we show that RAB5A, a key endocytic protein, is sufficient to induce large-scale, coordinated motility over tens of cells, and ballistic motion in otherwise kinetically arrested monolayers. This is linked to increased traction forces and to the extension of cell protrusions, which align with local velocity. Molecularly, impairing endocytosis, macropinocytosis or increasing fluid efflux abrogates RAB5A-induced collective motility. A simple model based on mechanical junctional tension and an active cell reorientation mechanism for the velocity of self-propelled cells identifies regimes of monolayer dynamics that explain endocytic reawakening of locomotion in terms of a combination of large-scale directed migration and local unjamming. These changes in multicellular dynamics enable collectives to migrate under physical constraints and may be exploited by tumours for interstitial dissemination.

  2. Real-time mapping of the corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy

    NASA Astrophysics Data System (ADS)

    Guthoff, Rudolf F.; Zhivov, Andrey; Stachs, Oliver

    2010-02-01

    The aim of the study was to produce two-dimensional reconstruction maps of the living corneal sub-basal nerve plexus by in vivo laser scanning confocal microscopy in real time. CLSM source data (frame rate 30 Hz, 384 × 384 pixels) were used to create large-scale maps of the scanned area by selecting the Automatic Real Time (ART) composite mode. The mapping algorithm is based on an affine transformation. Microscopy of the sub-basal nerve plexus was performed on normal and LASIK eyes as well as on rabbit eyes. Real-time mapping of the sub-basal nerve plexus was performed at large scale, up to a size of 3.2 mm × 3.2 mm. The developed method enables real-time in vivo mapping of the sub-basal nerve plexus, which is strictly necessary for statistically sound conclusions about morphometric plexus alterations.
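
    The affine-transformation mapping mentioned above can be illustrated with a least-squares fit: given matched point pairs between overlapping frames, estimate the 2×3 affine that places a new frame into the growing composite map. The correspondences below are synthetic; a real pipeline would obtain them from image registration.

```python
# Least-squares estimation of a 2x3 affine transform from point matches.
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Affine A (2x3) minimizing ||dst - A @ [src; 1]|| over all pairs."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (n, 3)
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    return A.T                                   # (2, 3)

# Synthetic ground truth: small rotation + translation between frames.
theta, t = np.deg2rad(3.0), np.array([12.0, -4.5])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.random.default_rng(3).uniform(0, 384, size=(40, 2))
dst = src @ R.T + t

A = fit_affine(src, dst)
print("recovered [R | t]:\n", A.round(3))
```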

  3. High-energy density nonaqueous all redox flow lithium battery enabled with a polymeric membrane.

    PubMed

    Jia, Chuankun; Pan, Feng; Zhu, Yun Guang; Huang, Qizhao; Lu, Li; Wang, Qing

    2015-11-01

    Redox flow batteries (RFBs) are considered one of the most promising large-scale energy storage technologies. However, conventional RFBs suffer from low energy density due to the low solubility of the active materials in electrolyte. On the basis of the redox targeting reactions of battery materials, the redox flow lithium battery (RFLB) demonstrated in this report presents a disruptive approach to drastically enhancing the energy density of flow batteries. With LiFePO4 and TiO2 as the cathodic and anodic Li storage materials, respectively, the tank energy density of RFLB could reach ~500 watt-hours per liter (50% porosity), which is 10 times higher than that of a vanadium redox flow battery. The cell exhibits good electrochemical performance under a prolonged cycling test. Our prototype RFLB full cell paves the way toward the development of a new generation of flow batteries for large-scale energy storage.

  4. Large-scale production of human pluripotent stem cell derived cardiomyocytes.

    PubMed

    Kempf, Henning; Andree, Birgit; Zweigerdt, Robert

    2016-01-15

    Regenerative medicine, including preclinical studies in large animal models and tissue engineering approaches as well as innovative assays for drug discovery, will require the constant supply of hPSC-derived cardiomyocytes and other functional progenies. Respective cell production processes must be robust, economically viable and ultimately GMP-compliant. Recent research has enabled transition of lab scale protocols for hPSC expansion and cardiomyogenic differentiation towards more controlled processing in industry-compatible culture platforms. Here, advanced strategies for the cultivation and differentiation of hPSCs will be reviewed by focusing on stirred bioreactor-based techniques for process upscaling. We will discuss how cardiomyocyte mass production might benefit from recent findings such as cell expansion at the cardiovascular progenitor state. Finally, remaining challenges will be highlighted, specifically regarding three dimensional (3D) hPSC suspension culture and critical safety issues ahead of clinical translation. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Towards AI-powered personalization in MOOC learning

    NASA Astrophysics Data System (ADS)

    Yu, Han; Miao, Chunyan; Leung, Cyril; White, Timothy John

    2017-12-01

    Massive Open Online Courses (MOOCs) represent a form of large-scale learning that is changing the landscape of higher education. In this paper, we offer a perspective on how advances in artificial intelligence (AI) may enhance learning and research on MOOCs. We focus on emerging AI techniques including how knowledge representation tools can enable students to adjust the sequence of learning to fit their own needs; how optimization techniques can efficiently match community teaching assistants to MOOC mediation tasks to offer personal attention to learners; and how virtual learning companions with human traits such as curiosity and emotions can enhance learning experience on a large scale. These new capabilities will also bring opportunities for educational researchers to analyse students' learning skills and uncover points along learning paths where students with different backgrounds may require different help. Ethical considerations related to the application of AI in MOOC education research are also discussed.
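
    The TA-to-task matching mentioned above is, in its simplest form, a classical assignment problem. The sketch below solves a toy instance with the Hungarian algorithm as implemented in SciPy; the cost matrix is invented for illustration, and the optimization the authors have in mind may well be richer.

```python
# Toy assignment of community teaching assistants to mediation tasks.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: how poorly TA i fits task j (e.g., 1 - skill/topic match).
cost = np.array([
    [0.1, 0.6, 0.8],
    [0.7, 0.2, 0.5],
    [0.4, 0.9, 0.1],
    [0.3, 0.4, 0.6],   # more TAs than tasks is allowed
])
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"TA {i} -> task {j} (cost {cost[i, j]})")
```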

  6. Minimization of socioeconomic disruption for displaced populations following disasters.

    PubMed

    El-Anwar, Omar; El-Rayes, Khaled; Elnashai, Amr

    2010-07-01

    In the aftermath of catastrophic natural disasters such as hurricanes, tsunamis and earthquakes, emergency management agencies come under intense pressure to provide temporary housing to address the large-scale displacement of the vulnerable population. Temporary housing is essential to enable displaced families to reestablish their normal daily activities until permanent housing solutions can be provided. Temporary housing decisions, however, have often been criticized for their failure to fulfil the socioeconomic needs of the displaced families within acceptable budgets. This paper presents the development of (1) socioeconomic disruption metrics that are capable of quantifying the socioeconomic impacts of temporary housing decisions on displaced populations; and (2) a robust multi-objective optimization model for temporary housing that is capable of simultaneously minimizing socioeconomic disruptions and public expenditures in an effective and efficient manner. A large-scale application example is optimized to illustrate the use of the model and demonstrate its capabilities in generating optimal plans for realistic temporary housing problems.
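
    As a minimal illustration of the two-objective structure described above, the sketch below filters candidate temporary-housing plans down to the Pareto-optimal set over (socioeconomic disruption, public expenditure). The plan scores are random stand-ins; the paper's model generates and evaluates plans far more carefully.

```python
# Keep the non-dominated (Pareto-optimal) plans over two minimized objectives.
import numpy as np

rng = np.random.default_rng(11)
plans = rng.uniform(0, 1, size=(200, 2))   # columns: [disruption, cost]

def pareto_front(points: np.ndarray) -> np.ndarray:
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some other point is <= p everywhere and < somewhere.
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(plans)
print(f"{len(front)} non-dominated plans out of {len(plans)}")
```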

  7. Item Banking Enables Stand-Alone Measurement of Driving Ability.

    PubMed

    Khadka, Jyoti; Fenwick, Eva K; Lamoureux, Ecosse L; Pesudovs, Konrad

    2016-12-01

    To explore whether large item sets, as used in item banking, enable important latent traits, such as driving, to form stand-alone measures. The 88-item activity limitation (AL) domain of the glaucoma module of the Eye-tem Bank was interviewer-administered to patients with glaucoma. Rasch analysis was used to calibrate all items in AL domain on the same interval-level scale and test its psychometric properties. Based on Rasch dimensionality metrics, the AL scale was separated into subscales. These subscales underwent separate Rasch analyses to test whether they could form stand-alone measures. Independence of these measures was tested with Bland and Altman (B&A) Limit of Agreement (LOA). The AL scale was completed by 293 patients (median age, 71 years). It demonstrated excellent precision (3.12). However, Rasch analysis dimensionality metrics indicated that the domain arguably had other dimensions which were driving, luminance, and reading. Once separated, the remaining AL items, driving and luminance subscales, were unidimensional and had excellent precision of 4.25, 2.94, and 2.22, respectively. The reading subscale showed poor precision (1.66), so it was not examined further. The luminance subscale demonstrated excellent agreement (mean bias, 0.2 logit; 95% LOA, -2.2 to 3.3 logit); however, the driving subscale demonstrated poor agreement (mean bias, 1.1 logit; 95% LOA, -4.8 to 7.0 logit) with the AL scale. These findings indicate that driving items in the AL domain of the glaucoma module were perceived and responded to differently from the other AL items, but the reading and luminance items were not. Therefore, item banking enables stand-alone measurement of driving ability in glaucoma.
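
    The Rasch analysis underpinning this item-bank approach rests, in its simplest dichotomous form, on P(endorse) = exp(θ − b) / (1 + exp(θ − b)); the sketch below estimates a person's measure θ from calibrated item difficulties b by maximum likelihood. The difficulties and responses are invented, and rated questionnaire items like these would in practice need a polytomous (rating-scale) extension.

```python
# Dichotomous Rasch model: estimate person ability theta (in logits) from a
# response pattern, given calibrated item difficulties b, via Newton's method.
import numpy as np

def rasch_p(theta: float, b: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def person_measure(responses: np.ndarray, b: np.ndarray) -> float:
    theta = 0.0
    for _ in range(50):
        p = rasch_p(theta, b)
        grad = np.sum(responses - p)        # d logL / d theta
        hess = -np.sum(p * (1 - p))         # d2 logL / d theta2
        step = grad / hess
        theta -= step
        if abs(step) < 1e-8:
            break
    return theta

b = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])   # item difficulties (logits)
responses = np.array([1, 1, 1, 0, 0])       # person endorses easier items
print(f"estimated ability: {person_measure(responses, b):.2f} logits")
```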

  8. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  9. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  10. Flexible Ablators: Applications and Arcjet Testing

    NASA Technical Reports Server (NTRS)

    Arnold, James O.; Venkatapathy, Ethiraj; Beck, Robin A S.; Mcguire, Kathy; Prabhu, Dinesh K.; Gorbunov, Sergey

    2011-01-01

    Flexible ablators were conceived in 2009 to meet the technology pull for large, human Mars Exploration Class, 23 m diameter hypersonic inflatable aerodynamic decelerators. As described elsewhere, they have recently been undergoing initial technology readiness level (TRL) advancement by NASA. The performance limits of flexible ablators in terms of maximum heat rate, pressure, and shear remain to be defined. Further, it is hoped that this emerging technology will vastly expand the capability of future NASA missions involving atmospheric entry systems. This paper considers four topics of relevance to flexible ablators: (1) their potential applications to near/far-term human and robotic missions; (2) brief consideration of the balance between heat shield diameter, flexible ablator performance limits, entry vehicle controllability, and aft-body shear layer impingement of interest to designers of very large entry vehicles; (3) the approach for developing bonding processes for flexible ablators for use on rigid entry bodies; and (4) design of large arcjet test articles that will enable the testing of flexible ablators in flight-like, combined environments (heat flux, pressure, shear, and structural tensile loading). Based on a review of thermal protection system performance requirements for future entry vehicles, it is concluded that flexible ablators have broad applications to conventional, rigid entry body systems and are enabling for large deployable (both inflatable and mechanical) heat shields. Because of the game-changing nature of flexible ablators, it appears that NASA's Office of the Chief Technologist (OCT) will fund a focused, 3-year TRL advancement of the new materials, capable of performance at heat fluxes in the range of 200-600 W/cm². This support will enable the manufacture and use of the large-scale arcjet test designs that will be a key element of this OCT-funded activity.

  11. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; McMahon, Jeff; Miller, Nathan T.; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián.; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2016-07-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  12. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; ...

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  13. The Multi-Scale Network Landscape of Collaboration.

    PubMed

    Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena, which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from large-scale, comprehensive Compact Disc recording data. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists.
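
    A sketch of how such a collaboration network can be assembled from recording credits: build the bipartite musician-CD graph and take its weighted one-mode projection, so that two musicians are linked with a weight counting their shared recordings. This assumes the networkx library; the toy credits are invented.

```python
# Weighted one-mode projection of a toy musician-CD bipartite graph.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
cds = {"cd1": ["Karajan", "BPO", "Richter"],
       "cd2": ["Karajan", "BPO"],
       "cd3": ["Richter", "Oistrakh"]}
for cd, artists in cds.items():
    B.add_node(cd, bipartite=0)
    B.add_nodes_from(artists, bipartite=1)
    B.add_edges_from((cd, a) for a in artists)

musicians = [n for n, d in B.nodes(data=True) if d["bipartite"] == 1]
G = bipartite.weighted_projected_graph(B, musicians)
print(sorted(G.edges(data="weight")))   # e.g. Karajan-BPO has weight 2
```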

  14. The Multi-Scale Network Landscape of Collaboration

    PubMed Central

    Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena, which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from large-scale, comprehensive Compact Disc recording data. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists. PMID:26990088

  15. Highly flexible electronics from scalable vertical thin film transistors.

    PubMed

    Liu, Yuan; Zhou, Hailong; Cheng, Rui; Yu, Woojong; Huang, Yu; Duan, Xiangfeng

    2014-03-12

    Flexible thin-film transistors (TFTs) are of central importance for diverse electronic and particularly macroelectronic applications. Current TFTs using organic or inorganic thin-film semiconductors are usually limited by either poor electrical performance or insufficient mechanical flexibility. Here, we report a new design of highly flexible vertical TFTs (VTFTs) with superior electrical performance and mechanical robustness. By using graphene as a work-function-tunable contact for an amorphous indium gallium zinc oxide (IGZO) thin film, the vertical current flow across the graphene-IGZO junction can be effectively modulated by an external gate potential to enable VTFTs with a highest on-off ratio exceeding 10⁵. The unique vertical transistor architecture readily enables ultrashort-channel devices with very high current delivery and exceptional mechanical flexibility. With large-area graphene and IGZO thin films available, our strategy is intrinsically scalable for large-scale integration of VTFT arrays and logic circuits, opening up a new pathway to highly flexible macroelectronics.

  16. Characterizing the replicability of cell types defined by single cell RNA-sequencing data using MetaNeighbor.

    PubMed

    Crow, Megan; Paul, Anirban; Ballouz, Sara; Huang, Z Josh; Gillis, Jesse

    2018-02-28

    Single-cell RNA-sequencing (scRNA-seq) technology provides a new avenue to discover and characterize cell types; however, the experiment-specific technical biases and analytic variability inherent to current pipelines may undermine its replicability. Meta-analysis is further hampered by the use of ad hoc naming conventions. Here we demonstrate our replication framework, MetaNeighbor, that quantifies the degree to which cell types replicate across datasets, and enables rapid identification of clusters with high similarity. We first measure the replicability of neuronal identity, comparing results across eight technically and biologically diverse datasets to define best practices for more complex assessments. We then apply this to novel interneuron subtypes, finding that 24/45 subtypes have evidence of replication, which enables the identification of robust candidate marker genes. Across tasks we find that large sets of variably expressed genes can identify replicable cell types with high accuracy, suggesting a general route forward for large-scale evaluation of scRNA-seq data.

  17. Electromagnetic Design of Feedhorn-Coupled Transition-Edge Sensors for Cosmic Microwave Background Polarimetry

    NASA Technical Reports Server (NTRS)

    Chuss, David T.

    2011-01-01

    Observations of the cosmic microwave background (CMB) provide a powerful tool for probing the evolution of the early universe. Specifically, precision measurement of the polarization of the CMB enables a direct test for cosmic inflation. A key technological element on the path to the measurement of this faint signal is the capability to produce large format arrays of background-limited detectors. We describe the electromagnetic design of feedhorn-coupled, TES-based sensors. Each linear orthogonal polarization from the feed horn is coupled to a superconducting microstrip line via a symmetric planar orthomode transducer (OMT). The symmetric OMT design allows for highly-symmetric beams with low cross-polarization over a wide bandwidth. In addition, this architecture enables a single microstrip filter to define the passband for each polarization. Care has been taken in the design to eliminate stray coupling paths to the absorbers. These detectors will be fielded in the Cosmology Large Angular Scale Surveyor (CLASS).

  18. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-10-06

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. These visual metaphors (e.g., word clouds, tag clouds, etc.) traditionally show a series of terms organized by space-filling algorithms. However, often lacking in these views is the ability to interactively explore the information to gain more detail, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, information is placed based on its relative similarity to other information to create the “near = similar” geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
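
    The "near = similar" placement can be sketched with classical multidimensional scaling: convert term co-occurrence into dissimilarities and embed the terms in 2-D so that spatial distance tracks dissimilarity. This uses scikit-learn's MDS purely as a stand-in; the abstract does not state Typograph's actual layout model, and the toy terms and counts are invented.

```python
# Embed terms in 2-D so that distance reflects dissimilarity (toy data).
import numpy as np
from sklearn.manifold import MDS

terms = ["cloud", "storage", "gpu", "kernel", "tag", "layout"]
cooc = np.array([[9, 5, 1, 1, 3, 2],
                 [5, 9, 1, 1, 2, 2],
                 [1, 1, 9, 6, 1, 1],
                 [1, 1, 6, 9, 1, 1],
                 [3, 2, 1, 1, 9, 5],
                 [2, 2, 1, 1, 5, 9]])

dissim = 1.0 - cooc / cooc.max()          # co-occurrence -> dissimilarity
np.fill_diagonal(dissim, 0.0)
xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(dissim)
for term, (x, y) in zip(terms, xy):
    print(f"{term:8s} ({x:+.2f}, {y:+.2f})")
```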

  19. High-frequency self-aligned graphene transistors with transferred gate stacks

    PubMed Central

    Cheng, Rui; Bai, Jingwei; Liao, Lei; Zhou, Hailong; Chen, Yu; Liu, Lixin; Lin, Yung-Chen; Jiang, Shan; Huang, Yu; Duan, Xiangfeng

    2012-01-01

    Graphene has attracted enormous attention for radio-frequency transistor applications because of its exceptional high carrier mobility, high carrier saturation velocity, and large critical current density. Herein we report a new approach for the scalable fabrication of high-performance graphene transistors with transferred gate stacks. Specifically, arrays of gate stacks are first patterned on a sacrificial substrate, and then transferred onto arbitrary substrates with graphene on top. A self-aligned process, enabled by the unique structure of the transferred gate stacks, is then used to position precisely the source and drain electrodes with minimized access resistance or parasitic capacitance. This process has therefore enabled scalable fabrication of self-aligned graphene transistors with unprecedented performance including a record-high cutoff frequency up to 427 GHz. Our study defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra–high-frequency circuits. PMID:22753503

  20. Scaling Theory of Entanglement at the Many-Body Localization Transition.

    PubMed

    Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C

    2017-09-15

    We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.

  1. Studying the Formation and Development of Molecular Clouds: With the CCAT Heterodyne Array Instrument (CHAI)

    NASA Technical Reports Server (NTRS)

    Goldsmith, Paul F.

    2012-01-01

    Surveys of all different types provide basic data using different tracers. Molecular clouds have structure over a very wide range of scales. Thus, "high resolution" surveys and studies of selected nearby clouds add critical information. The combination of large area and high resolution allows increased spatial dynamic range, which in turn enables detection of new and perhaps critical morphology (e.g., filaments). Theoretical modeling has made major progress and suggests that multiple forces are at work. Galactic-scale modeling is also progressing and indicates that stellar feedback is required. Models must strive to reproduce observed cloud structure at all scales. Astrochemical observations are not unrelated to questions of cloud evolution and star formation, but we are still learning how to use this capability.

  2. Foster Wheeler's Solutions for Large Scale CFB Boiler Technology: Features and Operational Performance of Łagisza 460 MWe CFB Boiler

    NASA Astrophysics Data System (ADS)

    Hotta, Arto

    During recent years, once-through supercritical (OTSC) CFB technology has been developed, enabling CFB technology to proceed to medium-scale (500 MWe) utility projects such as the Łagisza Power Plant in Poland, owned by Poludniowy Koncern Energetyczny SA (PKE), with a net efficiency of nearly 44%. The Łagisza power plant is currently being commissioned and reached full-load operation in March 2009. Initial operation shows very good performance and confirms that the CFB process scales to this size without problems. The once-through steam cycle utilizing Siemens' vertical-tube Benson technology has also performed as predicted in the CFB process. Foster Wheeler has developed the CFB design further, up to 800 MWe with a net efficiency of ≥45%.

  3. A Microfluidic Technique to Probe Cell Deformability

    PubMed Central

    Hoelzle, David J.; Varghese, Bino A.; Chan, Clara K.; Rowat, Amy C.

    2014-01-01

    Here we detail the design, fabrication, and use of a microfluidic device to evaluate the deformability of a large number of individual cells in an efficient manner. Typically, data for ~10² cells can be acquired within a 1 hr experiment. An automated image analysis program enables efficient post-experiment analysis of image data, enabling processing to be complete within a few hours. Our device geometry is unique in that cells must deform through a series of micron-scale constrictions, thereby enabling the initial deformation and time-dependent relaxation of individual cells to be assayed. The applicability of this method to human promyelocytic leukemia (HL-60) cells is demonstrated. Driving cells to deform through micron-scale constrictions using pressure-driven flow, we observe that human promyelocytic (HL-60) cells momentarily occlude the first constriction for a median time of 9.3 msec before passaging more quickly through the subsequent constrictions with a median transit time of 4.0 msec per constriction. By contrast, all-trans retinoic acid-treated (neutrophil-type) HL-60 cells occlude the first constriction for only 4.3 msec before passaging through the subsequent constrictions with a median transit time of 3.3 msec. This method can provide insight into the viscoelastic nature of cells, and ultimately reveal the molecular origins of this behavior. PMID:25226269

  4. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages each addressing separate challenge - workflow integration, parallel execution in either cloud or HPC environments and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  5. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  6. Analytics and Visualization Pipelines for Big ­Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages each addressing separate challenge - workflow integration, parallel execution in either cloud or HPC environments and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  7. Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks

    DOE PAGES

    Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy; ...

    2018-02-07

    The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy, utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater detail across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understanding mineral and organic compositions affecting the behavior of shales.

  8. Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy

    The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy, utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater detail across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understanding mineral and organic compositions affecting the behavior of shales.

  9. Obscuring and Feeding Supermassive Black Holes with Evolving Nuclear Star Clusters

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Burkert, A.; Krause, M.; Camenzind, M.; Meisenheimer, K.; Davies, R. I.

    2010-05-01

    Recently, high-resolution observations made with the help of the near-infrared adaptive optics integral field spectrograph SINFONI at the VLT proved the existence of massive and young nuclear star clusters in the centers of a sample of Seyfert galaxies. With the help of high-resolution hydrodynamical simulations with the PLUTO code, we follow the evolution of such clusters, especially focusing on mass and energy feedback from young stars. This leads to a filamentary inflow of gas on large scales (tens of parsecs), whereas a turbulent and very dense disk builds up on the parsec scale. Here we concentrate on the long-term evolution of the nuclear disk in NGC 1068 with the help of an effective viscous disk model, using the mass input from the large-scale simulations and accounting for star formation in the disk. This two-stage modeling enables us to connect the tens-of-parsecs scale region (observable with SINFONI) with the parsec-scale environment (MIDI observations). At the current age of the nuclear star cluster, our simulations predict disk sizes of order 0.8 to 0.9 pc, gas masses of order 10^6 M⊙, and mass transfer rates through the inner boundary of order 0.025 M⊙ yr^-1, in good agreement with values derived from observations.

  10. A global/local affinity graph for image segmentation.

    PubMed

    Xiaofang Wang; Yuxing Tang; Masnou, Simon; Liming Chen

    2015-04-01

    Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, thereby enabling perceptual grouping laws, including proximity, similarity, and continuity, to enter into action through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving an ℓ0-minimization problem, thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, thus implementing perceptual laws, for example, similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions. The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error.

  11. Molecular dynamics modeling framework for overcoming nanoshape retention limits of imprint lithography

    NASA Astrophysics Data System (ADS)

    Cherala, Anshuman; Sreenivasan, S. V.

    2018-12-01

    Complex nanoshaped structures (nanoshape structures here are defined as shapes enabled by sharp corners with radius of curvature <5 nm) have been shown to enable emerging nanoscale applications in energy, electronics, optics, and medicine. Fabricating such nanoshapes at high throughput is well beyond the capabilities of advanced optical lithography. While the highest-resolution e-beam processes (Gaussian beam tools with non-chemically amplified resists) can achieve <5 nm resolution, this is only available at very low throughputs. Large-area e-beam processes, needed for photomasks and imprint templates, are limited to 18 nm half-pitch lines and spaces and 20 nm half-pitch hole patterns. Using nanoimprint lithography, we have previously demonstrated the ability to fabricate precise diamond-like nanoshapes with 3 nm radius corners over large areas. An exemplary shaped silicon nanowire ultracapacitor device was fabricated with these nanoshaped structures, wherein the half-pitch was 100 nm. The device significantly exceeded standard nanowire capacitor performance (by 90%) due to the relative increase in surface area per unit projected area enabled by the nanoshape. Going beyond the previous work, in this paper we explore the scaling of these nanoshaped structures to 10 nm half-pitch and below. At these scales a new "shape retention" resolution limit is observed due to polymer relaxation in imprint resists, which cannot be predicted with a linear elastic continuum model. An all-atom molecular dynamics model of the nanoshape structure was developed here to study this shape retention phenomenon and accurately predict the polymer relaxation. The atomistic framework is an essential modeling and design tool to extend the capability of imprint lithography to sub-10 nm nanoshapes. This framework has been used here to propose process refinements that maximize shape retention, and to design template assist features (design for nanoshape retention) to achieve targeted nanoshapes.

  12. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343
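
    As a concrete illustration of per-region focus assessment, the sketch below scores image tiles with the variance-of-the-Laplacian focus measure. This is a generic stand-in rather than the authors' statistical-learning method, and the tile size and threshold are assumed values.

        # Tile-wise blur flagging for a whole-slide image. Illustrative
        # sketch only; the 512 px tile and the threshold are assumptions.
        import cv2

        def blur_score(tile_gray):
            # Variance of the Laplacian: higher means sharper.
            return cv2.Laplacian(tile_gray, cv2.CV_64F).var()

        def flag_blurred_tiles(slide_gray, tile=512, threshold=50.0):
            # Return indices of tiles that likely need new focus points.
            flagged = []
            h, w = slide_gray.shape
            for r in range(0, h - tile + 1, tile):
                for c in range(0, w - tile + 1, tile):
                    if blur_score(slide_gray[r:r + tile, c:c + tile]) < threshold:
                        flagged.append((r // tile, c // tile))
            return flagged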

  13. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures

    NASA Astrophysics Data System (ADS)

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A.; Park, Jiwoong

    2017-10-01

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides--which represent one- and three-atom-thick two-dimensional building blocks, respectively--have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  14. Layer-by-layer assembly of two-dimensional materials into wafer-scale heterostructures.

    PubMed

    Kang, Kibum; Lee, Kan-Heng; Han, Yimo; Gao, Hui; Xie, Saien; Muller, David A; Park, Jiwoong

    2017-10-12

    High-performance semiconductor films with vertical compositions that are designed to atomic-scale precision provide the foundation for modern integrated circuitry and novel materials discovery. One approach to realizing such films is sequential layer-by-layer assembly, whereby atomically thin two-dimensional building blocks are vertically stacked, and held together by van der Waals interactions. With this approach, graphene and transition-metal dichalcogenides (which represent one- and three-atom-thick two-dimensional building blocks, respectively) have been used to realize previously inaccessible heterostructures with interesting physical properties. However, no large-scale assembly method exists at present that maintains the intrinsic properties of these two-dimensional building blocks while producing pristine interlayer interfaces, thus limiting the layer-by-layer assembly method to small-scale proof-of-concept demonstrations. Here we report the generation of wafer-scale semiconductor films with a very high level of spatial uniformity and pristine interfaces. The vertical composition and properties of these films are designed at the atomic scale using layer-by-layer assembly of two-dimensional building blocks under vacuum. We fabricate several large-scale, high-quality heterostructure films and devices, including superlattice films with vertical compositions designed layer-by-layer, batch-fabricated tunnel device arrays with resistances that can be tuned over four orders of magnitude, band-engineered heterostructure tunnel diodes, and millimetre-scale ultrathin membranes and windows. The stacked films are detachable, suspendable and compatible with water or plastic surfaces, which will enable their integration with advanced optical and mechanical systems.

  15. Cell-free protein synthesis: applications in proteomics and biotechnology.

    PubMed

    He, Mingyue

    2008-01-01

    Protein production is one of the key steps in biotechnology and functional proteomics. Expression of proteins in heterologous hosts (such as in E. coli) is generally lengthy and costly. Cell-free protein synthesis is thus emerging as an attractive alternative. In addition to its simplicity and speed for protein production, cell-free expression allows generation of functional proteins that are difficult to produce by in vivo systems. Recent exploitation of cell-free systems has enabled the development of novel technologies for rapid discovery of proteins with desirable properties from very large libraries. This article reviews the recent developments in cell-free systems and their application in large-scale protein analysis.

  16. Effect of groove width of modified current collector on internal short circuit of abused lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Shi, Yang; Noelle, Daniel J.; Le, Anh V.; Qiao, Yu

    2017-10-01

    In a lithium-ion battery (LIB), mechanical abuse often leads to internal short circuits (ISC) that trigger thermal runaway. We investigated a thermal-runaway mitigation (TRM) technique using a modified current collector. By generating surface grooves on the current collector, the area of electrodes directly involved in ISC could be largely reduced, which decreased the ISC current. The TRM mechanism took effect immediately after the LIB was damaged. The testing data indicate that the groove width is a critical factor. With optimized groove width, this technique may enable robust and multifunctional design of LIB cells for large-scale energy-storage units.

  17. Structural design of the Large Deployable Reflector (LDR)

    NASA Technical Reports Server (NTRS)

    Satter, Celeste M.; Lou, Michael C.

    1991-01-01

    An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.

  18. Structural design of the Large Deployable Reflector (LDR)

    NASA Astrophysics Data System (ADS)

    Satter, Celeste M.; Lou, Michael C.

    1991-09-01

    An integrated Large Deployable Reflector (LDR) analysis model was developed to enable studies of system responses to the mechanical and thermal disturbances anticipated during on-orbit operations. Functional requirements of the major subsystems of the LDR are investigated, design trades are conducted, and design options are proposed. System mass and inertia properties are computed in order to estimate environmental disturbances, and in the sizing of control system hardware. Scaled system characteristics are derived for use in evaluating launch capabilities and achievable orbits. It is concluded that a completely passive 20-m primary appears feasible for the LDR from the standpoint of both mechanical vibration and thermal distortions.

  19. Assessment of the State-of-the-Art in the Design and Manufacturing of Large Composite Structure

    NASA Technical Reports Server (NTRS)

    Harris, C. E.

    2001-01-01

    This viewgraph presentation gives an assessment of the state-of-the-art in the design and manufacturing of large composite structures, including details on the use of continuous fiber reinforced polymer matrix composites (CFRP) in commercial and military aircraft and in space launch vehicles. Project risk mitigation plans must include a building-block test approach to structural design development, manufacturing process scale-up development tests, and pre-flight ground tests to verify structural integrity. The potential benefits of composite structures justify NASA's investment in developing the technology. Advanced composite structures technology is an enabler for virtually every Aero-Space Technology Enterprise Goal.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Jiangye

    Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels in images, since they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
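
    The record does not spell out the network architecture, so the following is only a minimal fully-convolutional sketch (PyTorch) of the kind of pixel-wise panel delineation described; the layer sizes and loss are assumptions, not the paper's design.

        # Minimal fully-convolutional sketch for delineating solar panels
        # in aerial tiles; the architecture is illustrative only.
        import torch.nn as nn
        import torch.nn.functional as F

        class PanelSegNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                )
                self.head = nn.Conv2d(128, 1, 1)  # per-pixel panel logit

            def forward(self, x):
                logits = self.head(self.enc(x))
                # Upsample back to input resolution for a dense mask.
                return F.interpolate(logits, size=x.shape[-2:],
                                     mode="bilinear", align_corners=False)

        # Training would minimize, e.g., a binary cross-entropy loss
        # against masks rasterized from the manually labeled imagery:
        # loss = F.binary_cross_entropy_with_logits(net(images), masks)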

  1. Scalable MWIR and LWIR optical system designs employing a large spherical primary mirror and small refractive aberration correctors

    NASA Astrophysics Data System (ADS)

    Beach, David A.

    2001-12-01

    Design variants of a recently developed optical imaging system have been computed for the thermal infrared spectral bands, which offer some advantages for long-range surveillance and astronomy. Only the spherical primary mirror has the full pupil diameter, all other components being sub-diameter, so scaling is possible up to relatively large pupils. Low-cost fabrication is enabled by the prevalence of spherical optical surfaces. Both MWIR and LWIR spectral transmissions are enabled by the choice of corrector materials, the examples given employing germanium and sapphire for 3.5 - 5.5 micrometers and germanium and zinc selenide for 3.5 - 5.5 micrometers and 8 - 12 micrometers passbands. Diffraction at these wavelengths is the main contributor to resolution constraints, so high numerical aperture values are preferred to enable a better match of blur spot diameter to generally available pixel dimensions. The systems described can routinely be designed to have speeds of f/0.8 or faster, while maintaining diffraction-limited performance over useful angular fields. Because the new design system employs a relayed catadioptric, it is possible to make the aperture stop of the system coincident with the window of the detector cryostat, enabling precise radiometric geometry. The central obscuration provides a convenient location for a calibration source, and both this and a mask for secondary spider supports can be included within the detector cold screen structure. Dual-band operation could be enabled by inclusion of a spectral beam splitter prior to a dual relay/imager system.
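
    The blur-spot argument can be checked with one line of arithmetic: the diffraction-limited Airy-disk diameter is d = 2.44 λ N for f-number N, so an f/0.8 system keeps the spot comparable to typical infrared pixel pitches. The wavelengths below are representative values, not figures from the paper.

        # Airy-disk diameter d = 2.44 * lambda * N (first dark ring).
        def airy_disk_diameter_um(wavelength_um, f_number):
            return 2.44 * wavelength_um * f_number

        for wl in (4.0, 10.0):   # representative MWIR and LWIR wavelengths
            d = airy_disk_diameter_um(wl, 0.8)
            print(f"lambda = {wl:4.1f} um at f/0.8 -> blur {d:4.1f} um")
        # -> about 7.8 um (MWIR) and 19.5 um (LWIR)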

  2. Hysteresis-Free Carbon Nanotube Field-Effect Transistors.

    PubMed

    Park, Rebecca S; Hills, Gage; Sohn, Joon; Mitra, Subhasish; Shulaker, Max M; Wong, H-S Philip

    2017-05-23

    While carbon nanotube (CNT) field-effect transistors (CNFETs) promise high-performance and energy-efficient digital systems, large hysteresis degrades these potential CNFET benefits. As hysteresis is caused by traps surrounding the CNTs, previous works have shown that clean interfaces that are free of traps are important to minimize hysteresis. Our previous findings on the sources and physics of hysteresis in CNFETs enabled us to understand the influence of gate dielectric scaling on hysteresis. To begin with, we validate through simulations how scaling the gate dielectric thickness results in greater-than-expected benefits in reducing hysteresis. Leveraging this insight, we experimentally demonstrate reducing hysteresis to <0.5% of the gate-source voltage sweep range using a very large-scale integration compatible and solid-state technology, simply by fabricating CNFETs with a thin effective oxide thickness of 1.6 nm. However, even with negligible hysteresis, large subthreshold swing is still observed in the CNFETs with multiple CNTs per transistor. We show that the cause of large subthreshold swing is due to threshold voltage variation between individual CNTs. We also show that the source of this threshold voltage variation is not explained solely by variations in CNT diameters (as is often ascribed). Rather, other factors unrelated to the CNTs themselves (i.e., process variations, random fixed charges at interfaces) are a significant factor in CNT threshold voltage variations and thus need to be further improved.

  3. Automation of large scale transient protein expression in mammalian cells

    PubMed Central

    Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.

    2011-01-01

    Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making manpower a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot that allows the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074

  4. Parallel algorithm for multiscale atomistic/continuum simulations using LAMMPS

    NASA Astrophysics Data System (ADS)

    Pavia, F.; Curtin, W. A.

    2015-07-01

    Deformation and fracture processes in engineering materials often require simultaneous descriptions over a range of length and time scales, with each scale using a different computational technique. Here we present a high-performance parallel 3D computing framework for executing large multiscale studies that couple an atomic domain, modeled using molecular dynamics and a continuum domain, modeled using explicit finite elements. We use the robust Coupled Atomistic/Discrete-Dislocation (CADD) displacement-coupling method, but without the transfer of dislocations between atoms and continuum. The main purpose of the work is to provide a multiscale implementation within an existing large-scale parallel molecular dynamics code (LAMMPS) that enables use of all the tools associated with this popular open-source code, while extending CADD-type coupling to 3D. Validation of the implementation includes the demonstration of (i) stability in finite-temperature dynamics using Langevin dynamics, (ii) elimination of wave reflections due to large dynamic events occurring in the MD region and (iii) the absence of spurious forces acting on dislocations due to the MD/FE coupling, for dislocations further than 10 Å from the coupling boundary. A first non-trivial example application of dislocation glide and bowing around obstacles is shown, for dislocation lengths of ∼50 nm using fewer than 1 000 000 atoms but reproducing results of extremely large atomistic simulations at much lower computational cost.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.

  6. Compiled MPI: Cost-Effective Exascale Applications Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G; Quinlan, D; Lumsdaine, A

    2012-04-10

    The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.
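
    The CoMPI annotation syntax is not given in this summary, so the fragment below is purely hypothetical: a plain mpi4py halo exchange whose fixed communication structure is the kind of information such an annotation would expose to a compiler. The "@compi:" comment is invented for illustration.

        # Ring halo exchange in mpi4py; the annotation comment below is a
        # hypothetical stand-in for a CoMPI-style source annotation.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        local = np.full(1024, rank, dtype=np.float64)

        # @compi: halo_exchange(pattern=ring, width=1)   (hypothetical)
        left, right = (rank - 1) % size, (rank + 1) % size
        halo = np.empty(1, dtype=np.float64)
        comm.Sendrecv(sendbuf=local[-1:], dest=right,
                      recvbuf=halo, source=left)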

  7. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach enables realistic application of Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
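
    A minimal sketch of the coding-set idea, assuming a first-order GMRF on a torus whose full conditionals have mean beta times the sum of the four neighbors and variance 1/kappa (the parameter values are illustrative; the truncated case would add the paper's acceptance/rejection step):

        # Simultaneous Gibbs update on checkerboard coding sets: sites of
        # one color share no neighbors, so they can all be drawn at once.
        import numpy as np

        def gibbs_sweep(x, beta=0.24, kappa=1.0, rng=np.random.default_rng()):
            std = 1.0 / np.sqrt(kappa)
            checker = (np.indices(x.shape).sum(axis=0) % 2).astype(bool)
            for color in (checker, ~checker):
                neigh = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                         np.roll(x, 1, 1) + np.roll(x, -1, 1))
                draw = rng.normal(beta * neigh, std)
                x[color] = draw[color]
            return x

        field = np.zeros((256, 256))
        for _ in range(100):      # burn-in sweeps
            field = gibbs_sweep(field)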

  8. Time-Resolved Small-Angle X-ray Scattering Reveals Millisecond Transitions of a DNA Origami Switch.

    PubMed

    Bruetzel, Linda K; Walker, Philipp U; Gerling, Thomas; Dietz, Hendrik; Lipfert, Jan

    2018-04-11

    Self-assembled DNA structures enable creation of specific shapes at the nanometer-micrometer scale with molecular resolution. The construction of functional DNA assemblies will likely require dynamic structures that can undergo controllable conformational changes. DNA devices based on shape complementary stacking interactions have been demonstrated to undergo reversible conformational changes triggered by changes in ionic environment or temperature. An experimentally unexplored aspect is how quickly conformational transitions of large synthetic DNA origami structures can actually occur. Here, we use time-resolved small-angle X-ray scattering to monitor large-scale conformational transitions of a two-state DNA origami switch in free solution. We show that the DNA device switches from its open to its closed conformation upon addition of MgCl2 in milliseconds, which is close to the theoretical diffusive speed limit. In contrast, measurements of the dimerization of DNA origami bricks reveal much slower and concentration-dependent assembly kinetics. DNA brick dimerization occurs on a time scale of minutes to hours suggesting that the kinetics depend on local concentration and molecular alignment.

  9. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    NASA Astrophysics Data System (ADS)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomena types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) interface, STA is also well-suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful for environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing for customization by adopters who can choose to replace parts of the existing implementation when desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers such as Azure, Amazon, and Google. The scalable, flexible and cloud-friendly architecture of Kinota makes it ideal for use in next-generation large-scale and high-resolution real-time environmental monitoring networks used in domains such as hydrology, geomorphology, and geophysics, as well as management applications such as flood early warning and regulatory enforcement.
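
    For illustration, pushing one reading into a SensorThings-compliant service such as Kinota could look like the sketch below; the endpoint URL and Datastream id are placeholders, while the entity path and JSON shape follow the OGC SensorThings API specification.

        # POST one Observation to a SensorThings API endpoint
        # (hypothetical host and Datastream id).
        import requests

        STA_BASE = "https://sta.example.org/v1.0"

        observation = {
            "phenomenonTime": "2017-06-01T12:00:00Z",
            "result": 3.2,   # e.g. stage height in meters
        }
        resp = requests.post(f"{STA_BASE}/Datastreams(42)/Observations",
                             json=observation, timeout=10)
        resp.raise_for_status()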

  10. Scaling up high throughput field phenotyping of corn and soy research plots using ground rovers

    NASA Astrophysics Data System (ADS)

    Peshlov, Boyan; Nakarmi, Akash; Baldwin, Steven; Essner, Scott; French, Jasenka

    2017-05-01

    Crop improvement programs require large and meticulous selection processes that effectively and accurately collect and analyze data, in order to generate quality plant products as efficiently as possible and to develop superior cropping and/or crop improvement methods. Typically, data collection for such testing is performed by field teams using hand-held instruments or manually-controlled devices. Although steps are taken to reduce error, the data collected in such a manner can be unreliable due to human error and fatigue, which reduces the ability to make accurate selection decisions. Monsanto engineering teams have developed a high-clearance mobile platform (Rover) as a step towards high-throughput and high-accuracy phenotyping at an industrial scale. The rovers are equipped with GPS navigation, multiple cameras and sensors, and on-board computers to acquire data and compute plant vigor metrics per plot. The supporting IT systems enable automatic path planning, plot identification, image and point cloud data QA/QC, and near real-time analysis where results are streamed to enterprise databases for additional statistical analysis and product advancement decisions. Since the rover program was launched in North America in 2013, the number of research plots we can analyze in a growing season has expanded dramatically. This work describes some of the successes and challenges in scaling up the rover platform for automated phenotyping to enable science at scale.

  11. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
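
    As an illustration of the lognormal representation mentioned above, the sketch below fits a single lognormal mode dN/dlnD to binned aerosol counts; the data points and initial guesses are made-up placeholders, not RACORO measurements.

        # Fit N, Dg, sigma_g of one lognormal aerosol mode.
        import numpy as np
        from scipy.optimize import curve_fit

        def dN_dlnD(D, N, Dg, sigma_g):
            return (N / (np.sqrt(2.0 * np.pi) * np.log(sigma_g)) *
                    np.exp(-0.5 * (np.log(D / Dg) / np.log(sigma_g)) ** 2))

        D = np.array([0.02, 0.04, 0.08, 0.15, 0.30, 0.60])   # diameter, um
        counts = np.array([120., 480., 900., 610., 150., 20.])
        (N, Dg, sg), _ = curve_fit(dN_dlnD, D, counts, p0=(1000., 0.1, 1.8))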

  12. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of a large amount of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a "Big Data" problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with potential to apply the platform at a larger scale (e.g. country level) and to multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high-resolution (30 m) crop classification map for a large territory (~28,100 km2, with 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex workflows of satellite data processing required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of the classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to a benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (north of Ukraine) in 2013. We found that GEE provides very good performance in enabling access to remote sensing products through the cloud platform and in providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
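
    A hedged sketch of how such a classification runs in the GEE Python API is given below; the image and ground-truth asset ids, band list, and class property are placeholders, not the assets used in the study.

        # Supervised crop classification in Google Earth Engine
        # (placeholder asset ids and band names).
        import ee
        ee.Initialize()

        image = (ee.Image("LANDSAT/LC08/C02/T1_TOA/LC08_181025_20130620")
                   .select(["B2", "B3", "B4", "B5", "B6", "B7"]))
        training = image.sampleRegions(
            collection=ee.FeatureCollection("users/example/kyiv_ground_truth"),
            properties=["crop_class"], scale=30)
        classifier = (ee.Classifier.smileRandomForest(100)
                        .train(training, "crop_class", image.bandNames()))
        crop_map = image.classify(classifier)   # 30 m crop map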

  13. Skate Genome Project: Cyber-Enabled Bioinformatics Collaboration

    PubMed Central

    Vincent, J.

    2011-01-01

    The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium (NECC), aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.

  14. Security and Efficiency Concerns With Distributed Collaborative Networking Environments

    DTIC Science & Technology

    2003-09-01

    have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure...Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global...designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while

  15. Thirteenth Annual Acquisition Research Symposium. Acquisition Research: Creating Synergy for Informed Change. Volume 1

    DTIC Science & Technology

    2016-04-30

    renewable energy projects with a focus on novel onshore/offshore and small/large scale wind turbine designs for expanding their operational range and...ROA to estimate the values of maintenance options created by the implementation of PHM in wind turbines . When an RUL is predicted for a subsystem...predicted for the system. The section titled Example— Wind Turbine With an Outcome-Based Contract presents a case study for a PHM enabled wind

  16. Incorporation of Outcome-Based Contract Requirements in a Real Options Approach for Maintenance Planning

    DTIC Science & Technology

    2016-04-30

    focus on novel onshore/offshore and small/large scale wind turbine designs for expanding their operational range and increasing their efficiency at...of maintenance options created by the implementation of PHM in wind turbines . When an RUL is predicted for a subsystem, there are multiple choices...The section titled Example— Wind Turbine With an Outcome-Based Contract presents a case study for a PHM enabled wind turbine with and without an

  17. Chemical energy storage: Part of a systemic solution

    NASA Astrophysics Data System (ADS)

    Schlögl, Robert

    2017-07-01

    This paper is a primer on the concepts and opportunities of chemical energy storage. Starting from the quest for decarbonisation, we reveal the possibilities of chemical energy storage. We briefly discuss the critical role of catalysis as an enabling technology. We concentrate on options for large-scale production of chemicals from CO2 and green hydrogen. We discuss one potential application of fueling future combustion engines that could run with minimal regulated emissions without exhaust purification and legal tricks.

  18. Micro-optical-mechanical system photoacoustic spectrometer

    DOEpatents

    Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.

    2013-01-01

    All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems and methods for operating such systems.

  19. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
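
    A tiny worked example of the interconversion idea: once one scale factor is optimized, a universal ratio yields the others at no extra cost. The numbers below are illustrative placeholders, not entries from the paper's databases.

        # Derive a ZPE scale factor from a harmonic-frequency factor via a
        # universal scale-factor ratio (all values are hypothetical).
        harmonic_scale = 0.986     # optimized harmonic-frequency factor
        zpe_over_harm = 0.980      # hypothetical universal ratio

        zpe_scale = harmonic_scale * zpe_over_harm
        raw_zpe = 65.3             # unscaled ZPE, kcal/mol (made up)
        print(f"scaled ZPE: {zpe_scale * raw_zpe:.1f} kcal/mol")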

  20. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf Cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our Cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the Cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates a real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st-century science -- a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in high energy physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  1. Increased Throughput and Sensitivity of Synchrotron-Based Characterization for Photovoltaic Materials

    DOE PAGES

    Morishige, Ashley E.; Laine, Hannu S.; Looney, Erin E.; ...

    2017-04-03

    Optimizing photovoltaic (PV) devices requires characterization and optimization across several length scales, from centimeters to nanometers. Synchrotron-based micro-X-ray fluorescence spectromicroscopy (μ-XRF) is a valuable link in the PV-related material and device characterization suite. μ-XRF maps of elemental distributions in PV materials have high spatial resolution and excellent sensitivity and can be measured on absorber materials and full devices. Recently, we implemented on-the-fly data collection (flyscan) at Beamline 2-ID-D at the Advanced Photon Source at Argonne National Laboratory, eliminating a 300 ms per-pixel overhead time. This faster scanning enables high-sensitivity (~10^14 atoms/cm^2), large-area (tens of thousands of μm^2), high-spatial-resolution (<200 nm scale) maps to be completed within a practical scanning time. We specifically show that when characterizing detrimental trace metal precipitate distributions in multicrystalline silicon wafers for PV, flyscans can increase the productivity of μ-XRF by an order of magnitude. Additionally, flyscan μ-XRF mapping enables relatively large-area correlative microscopy. As an example, we map the transition metal distribution in a 50 μm-diameter laser-fired contact of a silicon solar cell before and after lasing. While we focus on μ-XRF of mc-Si wafers for PV, our results apply broadly to synchrotron-based mapping of PV absorbers and devices.
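
    A back-of-envelope check of the claimed order-of-magnitude gain from removing the 300 ms per-pixel overhead (the map size and dwell time below are assumed for illustration):

        # Step scan vs flyscan time for a 200 x 200 pixel uXRF map.
        pixels = 200 * 200
        dwell_ms = 30.0                                  # assumed dwell
        step_h = pixels * (dwell_ms + 300.0) / 3.6e6     # ms -> hours
        fly_h = pixels * dwell_ms / 3.6e6
        print(f"step: {step_h:.1f} h, fly: {fly_h:.1f} h, "
              f"gain: {step_h / fly_h:.0f}x")            # ~11x speed-up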

  2. Reverse Fluorescence Enhancement and Colorimetric Bimodal Signal Readout Immunochromatography Test Strip for Ultrasensitive Large-Scale Screening and Postoperative Monitoring.

    PubMed

    Yao, Yingyi; Guo, Weisheng; Zhang, Jian; Wu, Yudong; Fu, Weihua; Liu, Tingting; Wu, Xiaoli; Wang, Hanjie; Gong, Xiaoqun; Liang, Xing-Jie; Chang, Jin

    2016-09-07

    Ultrasensitive and quantitative fast screening of cancer biomarkers by immunochromatography test strip (ICTS) is still challenging in the clinic. Gold nanoparticle (NP) based ICTS with colorimetric readout enables quick spectrum screening but suffers from nonquantitative performance; although ICTS with fluorescence readout (FICTS) allows quantitative detection, its sensitivity still deserves more effort and attention. In this work, by taking advantage of colorimetric ICTS and FICTS, we describe a reverse fluorescence enhancement ICTS (rFICTS) with bimodal signal readout for ultrasensitive and quantitative fast screening of carcinoembryonic antigen (CEA). In the presence of the target, gold NP aggregation in the T line induces a colorimetric readout, allowing on-the-spot spectrum screening in 10 min by the naked eye. Meanwhile, the reverse fluorescence enhancement signal enabled more accurate quantitative detection with better sensitivity (5.89 pg/mL for CEA), which is more than 2 orders of magnitude lower than that of conventional FICTS. The accuracy and stability of the rFICTS were investigated with more than 100 clinical serum samples for large-scale screening. Furthermore, this rFICTS also realized postoperative monitoring by detecting CEA in a patient with colon cancer and comparing with CT imaging diagnosis. These results indicate that the rFICTS is particularly suitable for point-of-care (POC) diagnostics in both resource-rich and resource-limited settings.

  3. Endogenous voltage gradients as mediators of cell-cell communication: strategies for investigating bioelectrical signals during pattern formation

    PubMed Central

    Adams, Dany S.; Levin, Michael

    2013-01-01

    Alongside the well-known chemical modes of cell-cell communication, we find an important and powerful system of bioelectrical signaling: changes in the resting voltage potential (Vmem) of the plasma membrane driven by ion channels, pumps and gap junctions. Slow Vmem changes in all cells serve as a highly conserved, information-bearing pathway that regulates cell proliferation, migration and differentiation. In embryonic and regenerative pattern formation and in the disorganization of neoplasia, bioelectrical cues serve as mediators of large-scale anatomical polarity, organ identity and positional information. Recent developments have resulted in tools that enable a high-resolution analysis of these biophysical signals and their linkage with upstream and downstream canonical genetic pathways. Here, we provide an overview for the study of bioelectric signaling, focusing on state-of-the-art approaches that use molecular physiology and developmental genetics to probe the roles of bioelectric events functionally. We highlight the logic, strategies and well-developed technologies that any group of researchers can employ to identify and dissect ionic signaling components in their own work and thus to help crack the bioelectric code. The dissection of bioelectric events as instructive signals enabling the orchestration of cell behaviors into large-scale coherent patterning programs will enrich on-going work in diverse areas of biology, as biophysical factors become incorporated into our systems-level understanding of cell interactions. PMID:22350846

  4. iPad: Semantic annotation and markup of radiological images.

    PubMed

    Rubin, Daniel L; Rodriguez, Cesar; Shah, Priyanka; Beaulieu, Chris

    2008-11-06

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

  5. Hollow microcarriers for large-scale expansion of anchorage-dependent cells in a stirred bioreactor.

    PubMed

    YekrangSafakar, Ashkan; Acun, Aylin; Choi, Jin-Woo; Song, Edward; Zorlutuna, Pinar; Park, Kidong

    2018-03-26

    With recent advances in biotechnology, mammalian cells are used in biopharmaceutical industries to produce valuable protein therapeutics and are investigated as effective therapeutic agents to treat permanently degenerative diseases in cell-based therapy. In these exciting and actively expanding fields, a reliable, efficient, and affordable platform to culture mammalian cells on a large scale is one of the most vital necessities. To produce and maintain a very large population of anchorage-dependent cells, a microcarrier-based stirred tank bioreactor is commonly used. In this approach, the cells are exposed to harmful hydrodynamic shear stress in the bioreactor, and the mass transfer rates of nutrients and gases in the bioreactor are often kept below an optimal level to prevent cellular damage from the shear stress. In this paper, a hollow microcarrier (HMC) is presented as a novel solution to protect cells from shear stress in stirred bioreactors, while ensuring a sufficient and uniform mass transfer rate of gases and nutrients. An HMC is a hollow microsphere on whose inner surface cells are cultured to be protected, while openings on the HMC provide sufficient exchange of media inside the HMC. As a proof of concept, we demonstrated the expansion of fibroblasts (NIH/3T3) and the expansion and cardiac differentiation of human induced pluripotent stem cells, along with detailed numerical analysis. We believe that the developed HMC can be a practical solution to enable large-scale expansion of shear-sensitive anchorage-dependent cells at an industrial scale with stirred bioreactors. © 2018 Wiley Periodicals, Inc.

  6. Low-energy transmission electron diffraction and imaging of large-area graphene

    PubMed Central

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-01-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials. PMID:28879233

  7. Low-energy transmission electron diffraction and imaging of large-area graphene.

    PubMed

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-09-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step toward industrial applications is the synthesis of wafer-scale single-crystal samples, and single-crystal samples such as centimeter-scale graphene domains have now been synthesized. However, a new challenge is to characterize such large-area samples efficiently. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), techniques better suited to very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of weak interactions with 2D materials.

  8. Low-Temperature Soft-Cover Deposition of Uniform Large-Scale Perovskite Films for High-Performance Solar Cells.

    PubMed

    Ye, Fei; Tang, Wentao; Xie, Fengxian; Yin, Maoshu; He, Jinjin; Wang, Yanbo; Chen, Han; Qiang, Yinghuai; Yang, Xudong; Han, Liyuan

    2017-09-01

    Large-scale high-quality perovskite thin films are crucial for producing high-performance perovskite solar cells. However, for perovskite films fabricated by solvent-rich processes, film uniformity can be disrupted by convection during thermal evaporation of the solvent. Here, a scalable low-temperature soft-cover deposition (LT-SCD) method is presented, in which the thermal-convection-induced defects in perovskite films are eliminated through a strategy of surface-tension relaxation. Compact, homogeneous perovskite films free of convection-induced defects are obtained over an area of 12 cm², enabling a power conversion efficiency (PCE) of 15.5% in a solar cell with an area of 5 cm², the highest efficiency reported at this cell area. A PCE of 15.3% is also obtained for a flexible perovskite solar cell deposited on a polyethylene terephthalate substrate, owing to the low-temperature processing. The present LT-SCD technology thus provides a new non-spin-coating route to the deposition of large-area uniform perovskite films for both rigid and flexible perovskite devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
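
    As a rough illustration of how the efficiency figures above are computed, the sketch below evaluates the standard power-conversion-efficiency relation PCE = Jsc · Voc · FF / Pin under AM1.5G illumination (100 mW cm⁻²). The J-V parameters are invented placeholders, not values reported in the paper.

    ```python
    # Back-of-envelope PCE calculation; all device parameters are assumed
    # placeholders chosen to land near the abstract's 15.5% figure.
    jsc = 21.5      # short-circuit current density (mA cm^-2), assumed
    voc = 1.05      # open-circuit voltage (V), assumed
    ff = 0.69       # fill factor (dimensionless), assumed
    p_in = 100.0    # incident power density (mW cm^-2), AM1.5G standard

    pce = jsc * voc * ff / p_in * 100.0
    print(f"PCE = {pce:.1f} %")   # ~15.6 % with these placeholder values
    ```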

  9. Integrated monitoring of wind plant systems

    NASA Astrophysics Data System (ADS)

    Whelan, Matthew J.; Janoyan, Kerop D.; Qiu, Tong

    2008-03-01

    Wind power is a renewable source of energy that is quickly gaining acceptance. Advanced sensor technologies have so far focused largely on improving wind turbine rotor aerodynamics and the efficiency of blade design. Alternatively, potential improvements in wind plant efficiency may be realized by reducing reactive losses of kinetic energy to the structural and substructural systems supporting the turbine machinery. Investigation of the complete dynamic structural response of the wind plant is proposed using a large-scale, high-rate wireless sensor network. The wireless network enables sensors to be placed across the sizable structure, including the rotating blades, without the cabling constraints and economic burden associated with large spools of measurement cable. A large array of multi-axis accelerometers is used to evaluate the modal properties of the system as well as of individual members, and would also enable long-term structural condition monitoring of the wind turbine. Additionally, environmental parameters, including wind speed, temperature, and humidity, are collected wirelessly for correlation. Such a wireless system could be integrated with electrical monitoring sensors and actuators and incorporated into a remote, multi-turbine, centralized plant monitoring and control system.
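
    A minimal sketch of how modal frequencies might be picked out of accelerometer records like those described above: Welch power-spectral-density estimation followed by simple peak picking. The sampling rate and the synthetic two-mode signal are assumptions for illustration, not data from the paper's network.

    ```python
    # Identify candidate modal frequencies in an acceleration record via
    # Welch PSD estimation and peak picking. The signal here is synthetic.
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 256.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    # Two decoupled modes plus measurement noise stand in for a turbine record.
    accel = (np.sin(2 * np.pi * 0.35 * t) + 0.4 * np.sin(2 * np.pi * 2.1 * t)
             + 0.1 * np.random.randn(t.size))

    freqs, psd = welch(accel, fs=fs, nperseg=4096)
    peaks, _ = find_peaks(psd, height=psd.max() * 0.05)
    print("candidate modal frequencies (Hz):", freqs[peaks])
    ```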

  10. Genome resequencing in Populus: Revealing large-scale genome variation and implications on specialized-trait genomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muchero, Wellington; Labbe, Jessy L; Priya, Ranjan

    2014-01-01

    To date, Populus ranks among the few plant species with a complete genome sequence and other highly developed genomic resources. With the first genome sequence among all tree species, Populus has been adopted as a suitable model organism for genomic studies in trees. However, far from being just a model species, Populus is a key renewable economic resource that plays a significant role in providing raw materials for the biofuel and pulp-and-paper industries. Therefore, aside from leading frontiers of basic tree molecular biology and ecological research, Populus leads frontiers in addressing global economic challenges related to fuel and fiber production. The latter fact suggests that research aimed at improving the quality and quantity of Populus as a raw material will likely drive more targeted and deeper research to unlock the economic potential tied up in the molecular processes that shape this tree species. Advances in genome sequence-driven technologies, such as resequencing of individual genotypes, which in turn facilitates large-scale SNP discovery and identification of large-scale polymorphisms, are key determinants of future success in these initiatives. In this treatise we discuss the implications of genome sequence-enabled technologies for Populus genomic and genetic studies of complex and specialized traits.
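
    As a toy illustration of the resequencing-based SNP discovery mentioned above, the sketch below scans aligned genotype sequences against a reference and reports variant positions. Real pipelines call variants from read alignments (e.g. BAM files); the sequences and clone names here are invented stand-ins.

    ```python
    # Report positions where any genotype's base differs from the reference.
    # Sequences are assumed pre-aligned and of equal length (illustrative).
    reference = "ATGGCTTACGATCCGA"
    genotypes = {
        "clone_1": "ATGGCTTACGATCCGA",
        "clone_2": "ATGACTTACGATCCGA",
        "clone_3": "ATGACTTACGTTCCGA",
    }

    for pos, ref_base in enumerate(reference):
        alt = {name: seq[pos] for name, seq in genotypes.items()
               if seq[pos] != ref_base}
        if alt:
            print(f"SNP at position {pos + 1}: ref={ref_base}, alt={alt}")
    ```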

  11. Generating multi-photon W-like states for perfect quantum teleportation and superdense coding

    NASA Astrophysics Data System (ADS)

    Li, Ke; Kong, Fan-Zhen; Yang, Ming; Ozaydin, Fatih; Yang, Qing; Cao, Zhuo-Liang

    2016-08-01

    An interesting aspect of multipartite entanglement is that perfect teleportation and superdense coding require not the maximally entangled W states but a special class of non-maximally entangled W-like states. Efficient preparation of such W-like states is therefore of great importance in quantum communications, yet it has not been studied as extensively as the preparation of W states. In this paper, we propose a simple optical scheme for efficient preparation of large-scale polarization-based entangled W-like states by fusing two W-like states or expanding a W-like state with an ancilla photon. Our scheme can also generate large-scale W states by fusing or expanding W or even W-like states. The cost analysis shows that in generating large-scale W states, the fusion mechanism achieves a higher efficiency with non-maximally entangled W-like states than with maximally entangled W states. Our scheme can also start fusion or expansion with Bell states, and it is composed of a polarization-dependent beam splitter, two polarizing beam splitters, and photon detectors. Requiring no ancilla photon or controlled gate to operate, our scheme can be realized with current photonics technology, and we believe it will enable advances in quantum teleportation and superdense coding in multipartite settings.
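
    As a small numerical check of why the non-maximally entangled states matter, the sketch below builds the standard W state and a W-like state with coefficient ratio 1 : 1 : √2 (the form commonly cited for perfect teleportation, following Agrawal and Pati) and compares the entanglement entropy of a single qubit. The construction is illustrative and unrelated to the authors' optical fusion scheme.

    ```python
    # Compare single-qubit entanglement entropy of the W and W-like states.
    # The W-like state leaves one qubit maximally mixed (entropy = 1 bit),
    # which is what permits perfect teleportation through it.
    import numpy as np

    def basis(bits):
        """|b2 b1 b0> as an 8-dimensional unit vector."""
        v = np.zeros(8)
        v[int(bits, 2)] = 1.0
        return v

    W      = (basis("001") + basis("010") + basis("100")) / np.sqrt(3)
    W_like = (basis("100") + basis("010") + np.sqrt(2) * basis("001")) / 2.0

    for name, psi in [("W", W), ("W-like", W_like)]:
        assert np.isclose(psi @ psi, 1.0)          # normalization check
        T = psi.reshape(4, 2)                      # split off the last qubit
        rho = T.T @ T                              # its reduced density matrix
        evals = np.linalg.eigvalsh(rho)
        entropy = -sum(p * np.log2(p) for p in evals if p > 1e-12)
        print(f"{name:7s} single-qubit entropy: {entropy:.3f} bits")
    ```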

  12. Polar firn air reveals large-scale impact of anthropogenic mercury emissions during the 1970s.

    PubMed

    Faïn, Xavier; Ferrari, Christophe P; Dommergue, Aurélien; Albert, Mary R; Battle, Mark; Severinghaus, Jeff; Arnaud, Laurent; Barnola, Jean-Marc; Cairns, Warren; Barbante, Carlo; Boutron, Claude

    2009-09-22

    Mercury (Hg) is an extremely toxic pollutant, and its biogeochemical cycle has been perturbed by anthropogenic emissions during recent centuries. In the atmosphere, gaseous elemental mercury (GEM; Hg⁰) is the predominant form of mercury (up to 95%). Here we report the evolution of atmospheric levels of GEM in mid- to high-northern latitudes inferred from the interstitial air of firn (perennial snowpack) at Summit, Greenland. GEM concentrations increased rapidly after World War II from approximately 1.5 ng m⁻³, reaching a maximum of approximately 3 ng m⁻³ around 1970, and then decreased until stabilizing at approximately 1.7 ng m⁻³ around 1995. This reconstruction reproduces real-time measurements available from the Arctic since 1995 and exhibits the same general trend observed in Europe since 1990. Anthropogenic emissions caused a two-fold rise in boreal atmospheric GEM concentrations before the 1970s, which likely contributed to higher deposition of mercury in both industrialized and remote areas. Once deposited, this toxin becomes available for methylation and, subsequently, for the contamination of ecosystems. Implementation of air pollution regulations, however, enabled a large-scale decline in atmospheric mercury levels during the 1980s. The results shown here suggest that potential increases in emissions in the coming decades could have a similarly large-scale impact on atmospheric Hg levels.

  13. Large-scale academic achievement testing of deaf and hard-of-hearing students: past, present, and future.

    PubMed

    Qi, Sen; Mitchell, Ross E

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using the Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the validity and reliability of using the Stanford for this special student population still require extensive scrutiny. Recent shifts in the educational policy environment, which require that schools enable all children to achieve proficiency through accountability testing, warrant a close examination of the adequacy and relevance of current large-scale testing of deaf and hard-of-hearing students. This study has three objectives: (a) to summarize the historical data of the last three decades to indicate trends in academic achievement for this special population, (b) to analyze current federal laws and regulations related to educational testing and special education, thereby identifying gaps between policy and practice in the field, especially the limitations of current testing programs in assessing what deaf and hard-of-hearing students know, and (c) to offer insights and suggestions for future testing programs for deaf and hard-of-hearing students.

  14. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    PubMed Central

    Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin

    2018-01-01

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
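
    For intuition about the algorithm HipMCL parallelizes, here is a minimal dense-matrix sketch of the basic MCL iteration (expansion, inflation, column renormalization). The graph, parameter values, and the argmax-based cluster read-out are illustrative simplifications, not HipMCL's distributed-memory implementation.

    ```python
    # Minimal Markov Clustering on a dense adjacency matrix: alternate
    # expansion (matrix powering) and inflation (element-wise powering plus
    # column normalization) until the flow matrix converges.
    import numpy as np

    def mcl(adjacency, expansion=2, inflation=2.0, iterations=50, tol=1e-6):
        # Add self-loops and column-normalize to get a column-stochastic matrix.
        M = adjacency.astype(float) + np.eye(adjacency.shape[0])
        M /= M.sum(axis=0)
        for _ in range(iterations):
            previous = M.copy()
            M = np.linalg.matrix_power(M, expansion)   # expansion: flow spreads
            M = np.power(M, inflation)                 # inflation: strong flows win
            M /= M.sum(axis=0)                         # re-normalize columns
            if np.allclose(M, previous, atol=tol):
                break
        # Group columns by their dominant "attractor" row.
        clusters = {}
        for col in range(M.shape[1]):
            clusters.setdefault(int(np.argmax(M[:, col])), []).append(col)
        return list(clusters.values())

    # Two triangles joined by one edge should separate into two clusters.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]])
    print(mcl(A))
    ```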

  15. Scaling and biomechanics of surface attachment in climbing animals

    PubMed Central

    Labonte, David; Federle, Walter

    2015-01-01

    Attachment devices are essential adaptations for climbing animals and valuable models for synthetic adhesives. A major unresolved question for both natural and bioinspired attachment systems is how attachment performance depends on size. Here, we discuss how contact geometry and mode of detachment influence the scaling of attachment forces for claws and adhesive pads, and how allometric data on biological systems can yield insights into their mechanism of attachment. Larger animals are expected to attach less well to surfaces, due to their smaller surface-to-volume ratio, and because it becomes increasingly difficult to distribute load uniformly across large contact areas. In order to compensate for this decrease in weight-specific adhesion, large animals could evolve disproportionately large pads, or adaptations that increase attachment efficiency (adhesion or friction per unit contact area). Available data suggest that attachment pad area scales close to isometry within clades, but pad efficiency in some animals increases with size so that attachment performance is approximately size-independent. The mechanisms underlying this biologically important variation in pad efficiency are still unclear. We suggest that switching between stress concentration (easy detachment) and uniform load distribution (strong attachment) via shear forces is one of the key mechanisms enabling the dynamic control of adhesion during locomotion. PMID:25533088
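
    To make the scaling argument concrete, the sketch below fits an allometric exponent on log-log axes, the usual way such data are analyzed. The mass and pad-area values are invented; for an area scaling against body mass, isometry predicts an exponent of about 2/3.

    ```python
    # Fit pad_area ~ a * mass^b by linear regression in log-log space.
    import numpy as np

    mass = np.array([0.01, 0.1, 1.0, 10.0, 100.0])       # body mass (g), invented
    pad_area = np.array([0.02, 0.09, 0.45, 2.1, 9.8])    # pad area (mm^2), invented

    b, log_a = np.polyfit(np.log(mass), np.log(pad_area), 1)
    print(f"scaling exponent b = {b:.2f} (isometry predicts ~0.67)")
    ```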

  16. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE PAGES

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...

    2018-01-05

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  17. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  18. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer; the camera can capture images at hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve its efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms without requiring any pre-designed target panel to be installed on the structure in advance. The accuracy and efficiency of the system for remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier of a suspension viaduct. Experimental results show that the proposed algorithm extracts accurate displacement signals and accomplishes vibration measurement of large-scale structures.
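
    For readers who want to experiment with the general approach, here is a hedged sketch of displacement extraction using pyramidal Lucas-Kanade tracking via OpenCV's off-the-shelf tracker, rather than the authors' modified inverse compositional algorithm. The video path, feature parameters, and pixel-to-millimetre scale factor are assumptions for illustration.

    ```python
    # Track natural corner features frame-to-frame and convert their mean
    # vertical motion (relative to the first frame) into displacement.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("structure_highspeed.avi")   # hypothetical recording
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pick trackable corner features on the structure (no target panel needed).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=20,
                                  qualityLevel=0.01, minDistance=10)
    origin = pts.copy()
    scale_mm_per_px = 0.5          # assumed calibration factor
    displacement = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, pts, None,
            winSize=(21, 21), maxLevel=3,
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        good = status.ravel() == 1
        dy_px = (new_pts[good, 0, 1] - origin[good, 0, 1]).mean()
        displacement.append(dy_px * scale_mm_per_px)
        prev_gray, pts = gray, new_pts

    print(f"{len(displacement)} displacement samples extracted")
    ```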

  19. DupTree: a program for large-scale phylogenetic analyses using gene tree parsimony.

    PubMed

    Wehe, André; Bansal, Mukul S; Burleigh, J Gordon; Eulenstein, Oliver

    2008-07-01

    DupTree is a new software program for inferring rooted species trees from collections of gene trees using the gene tree parsimony approach. The program implements a novel algorithm that significantly improves upon the run time of standard search heuristics for gene tree parsimony, and enables the first truly genome-scale phylogenetic analyses. In addition, DupTree allows users to examine alternate rootings and to weight the reconciliation costs for gene trees. DupTree is an open source project written in C++. DupTree for Mac OS X, Windows, and Linux along with a sample dataset and an on-line manual are available at http://genome.cs.iastate.edu/CBL/DupTree
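
    To illustrate the quantity that gene tree parsimony minimizes, here is a toy reconciliation sketch: each gene-tree node is mapped to the least common ancestor (LCA) of its leaves' species in the species tree, and a node counts as a duplication when it maps to the same species-tree node as one of its children. The tree encoding and the tiny example are invented for illustration and are unrelated to DupTree's actual input format or search heuristics.

    ```python
    # Count gene duplications implied by reconciling a binary gene tree
    # against a species tree via LCA mapping.
    class Node:
        def __init__(self, name=None, children=()):
            self.name, self.children, self.parent = name, list(children), None
            for child in self.children:
                child.parent = self

    def ancestors(node):
        path = []
        while node is not None:
            path.append(node)
            node = node.parent
        return path

    def lca(a, b):
        above_a = {id(n) for n in ancestors(a)}
        for n in ancestors(b):
            if id(n) in above_a:
                return n
        raise ValueError("nodes are not in the same tree")

    def duplication_cost(gene_root, species_of):
        """species_of maps each gene-tree leaf name to a species-tree leaf."""
        mapping, dups = {}, 0
        def walk(node):
            nonlocal dups
            if not node.children:                  # gene-tree leaf
                mapping[id(node)] = species_of[node.name]
                return
            for child in node.children:
                walk(child)
            left, right = node.children            # binary gene tree assumed
            m = lca(mapping[id(left)], mapping[id(right)])
            mapping[id(node)] = m
            if m is mapping[id(left)] or m is mapping[id(right)]:
                dups += 1                          # duplication node
        walk(gene_root)
        return dups

    # Species tree ((A,B),C) and gene tree ((a1,b1),(a2,c1)): one duplication.
    A, B, C = Node("A"), Node("B"), Node("C")
    species_root = Node(children=(Node(children=(A, B)), C))
    gene_root = Node(children=(Node(children=(Node("a1"), Node("b1"))),
                               Node(children=(Node("a2"), Node("c1")))))
    print(duplication_cost(gene_root, {"a1": A, "b1": B, "a2": A, "c1": C}))
    ```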

  20. Very large scale characterization of graphene mechanical devices using a colorimetry technique.

    PubMed

    Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer

    2017-06-08

    We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes, we used a model to select the optimal color filter. The method enables parallel and automated image processing for yield statistics. We find the survival probability to be correlated with a structural-mechanics scaling parameter given by D⁴/g³. Moreover, we extract a median adhesion energy of Γ = 0.9 J m⁻² between the membrane and the native SiO₂ at the bottom of the cavities.
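
    As a concrete reading of the scaling result, the sketch below computes D⁴/g³ for a few cavity geometries and tabulates survival fractions. All numbers are invented placeholders, not the paper's measurements.

    ```python
    # Compute the D^4/g^3 scaling parameter per geometry and pair it with
    # a survival fraction from (made-up) colorimetry yield counts.
    import numpy as np

    diameters = np.array([2e-6, 4e-6, 8e-6, 16e-6])   # cavity diameters D (m)
    depth = 300e-9                                    # cavity depth g (m)
    survived = np.array([980, 930, 610, 120])         # intact membranes, invented
    total = np.array([1000, 1000, 1000, 1000])        # inspected devices, invented

    scaling = diameters**4 / depth**3                 # D^4 / g^3
    probability = survived / total

    for s, p in zip(scaling, probability):
        print(f"D^4/g^3 = {s:.3e} m  ->  survival probability {p:.2f}")
    ```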
